For a long time, Threat Modeling by Frank Swiderski and Window Snyder has been the threat modeling Bible, and rightfully so. Traditional threat modeling is a software development life cycle process in which data flow diagrams are used to find points at which inputs are 'touched' by code.
Essentially the process was as follows:

- Understand the Adversary's View
  - Entry Points
  - Assets
  - Trust Levels
- Characterise the Security of the System
  - Use Scenarios
  - Assumptions / Dependencies
  - Model the System
- Determine Threats
  - Identify Threats
  - Analyse Threats
  - Determine Vulnerabilities
In this traditional approach, threats are primarily determined by understanding the application's entry points, trust levels, and assets of interest. These items are discovered by using a data-flow approach to trace data from every entry point, through the software, to the assets identified previously.
While this is a systematic way to examine a system for weaknesses, it is based entirely on an adversarial approach to the application. One of the primary problems with trying to threat model an application this way is that developers and architects tend to have ingrained assumptions about how the application will be used and approached.
No matter how much training or instruction you give developers, as human beings they cannot 'unlearn' their assumptions about how users will interact with the application. This clouds their judgement when they try to discover ways to damage the application.
Another problem is that the developers have to become Security Subject Matter Experts. They have to know every possible way that you can attack the application in order to apply those techniques to identify weak areas of the code. It's hard enough keeping up with all the emerging attack methods when it's your full-time job, and almost impossible for someone who has to devote their time to writing the code. It's not that they can't do it, but simply that they don't have time.
This, in part, led the Application Consulting and Engineering (ACE) team at Microsoft to revisit the practice of threat modeling. What they've come up with is ACE Threat Modeling, the methodology that Microsoft has used internally for over a year now.
ACE TM is based on a defender's perspective: it focuses on the threats, not the attacks. The traditional TM method focused on attacks; by discovering possible attacks, you uncovered a threat. In the new ACE TM, you start from the threats themselves, viewed from the defender's side.
The defender is in a particularly good position to understand threats to the system. The people creating the product know what is important in it, and how those important things are accessed. An attacker can never know what matters to the project team as well as the project team itself does.
ACE TM has a few core ideas that make it work well. The primary one is that something is not a threat unless a negative business impact can be realised. For example, compromising customer credit card numbers is a threat: there is severe and measurable negative business impact if this occurs.
Some of the things you might model as threats in the traditional TM method were not actual threats. For example, an overly long string might cause a method to raise an exception and the input to be rejected. While this is interesting, and an attacker could try it to see whether he can expose exception information, there is no negative business impact here. This is not a threat. It may be a potential attack, but it is not a threat to the application or the business.
This clear distinction between threats and attacks is important for a deep understanding of the threats present in an application.
The new ACE TM also approaches determining threats a bit differently. First and foremost, the context in which those threats are applicable must be identified. This is known as the Application Context, and it is determined by breaking the application down into Data, Roles, Components, and, optionally, External Dependencies.
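As a rough sketch, the Application Context could be captured in a simple structure like the one below. This is purely illustrative; the class and example values are my own, not the ACE tool's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationContext:
    """The article's four buckets: Data, Roles, Components,
    and optional External Dependencies."""
    data: list          # e.g. "Order Data", "Credit Card Numbers"
    roles: list         # e.g. "Registered User", "Administrator"
    components: list    # e.g. "Website", "Database"
    external_dependencies: list = field(default_factory=list)

# Hypothetical context for an online shop
ctx = ApplicationContext(
    data=["Order Data", "Credit Card Numbers"],
    roles=["Registered User", "Administrator"],
    components=["Website", "Database"],
)
```

Everything that follows (use cases, calls, threats) is derived from these four lists, which is what keeps the process systematic.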
The normal steps in the ACE TM methodology are:

- Define the Application Context (Data, Roles, Components, External Dependencies)
- Create Use Cases that enumerate the possible Calls
- Derive the Threats from those Calls
- Apply the Attack Library to provide attacks and countermeasures for each Threat
Once you define the Context, you create Use Cases that enumerate the possible Calls. Calls are how a subject interacts with an object. There’s a bit of a formula for it:
Caller Executes Action By Component
For example:
Registered User Executes Get Order Data By Website
These calls are the core of how you determine your actual Threats.
Threats are defined as the systematic corruption of the allowable actions. The threats are categorised by the security AIC triad: Availability, Integrity, and Confidentiality. For example, an Integrity threat from our example above would be:
Illegal Execution of Get Order Data using Website by Registered User
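The call formula and the threat derivation above can be sketched in a few lines. The phrasing follows the article's examples; the per-category wording is an assumption on my part, since the article only shows the Integrity case.

```python
AIC = ("Availability", "Integrity", "Confidentiality")

def call_sentence(caller, action, component):
    # The article's formula: Caller Executes Action By Component
    return f"{caller} Executes {action} By {component}"

def derive_threats(caller, action, component):
    # One candidate threat per leg of the AIC triad, phrased after the
    # article's Integrity example (the real tool's wording may differ).
    return [
        (category,
         f"Illegal Execution of {action} using {component} by {caller}")
        for category in AIC
    ]

call = call_sentence("Registered User", "Get Order Data", "Website")
threats = derive_threats("Registered User", "Get Order Data", "Website")
```

Because every call is generated from the same template, two people modeling the same application context should arrive at the same threat list.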
Keep in mind that these threats are based on the inclusions/exclusions model for risk assessment and security classification: we list the inclusions, and everything else must be an exclusion. So this methodology concentrates on the calls included in the threat model, which are derived from the application context.
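The inclusions/exclusions idea amounts to a whitelist of calls: anything not enumerated in the model is implicitly excluded. A minimal sketch, with hypothetical calls:

```python
# Calls enumerated in the threat model are the inclusions; any
# (caller, action, component) triple not listed is an exclusion.
ALLOWED_CALLS = {
    ("Registered User", "Get Order Data", "Website"),
    ("Administrator", "Delete Order Data", "Website"),
}

def is_allowed(caller, action, component):
    return (caller, action, component) in ALLOWED_CALLS

is_allowed("Registered User", "Get Order Data", "Website")  # included
is_allowed("Anonymous User", "Get Order Data", "Website")   # excluded
```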
It is this very systematic way of deriving threats that makes this approach consistent. One of the most important things about this method is that it can be performed by developers, architects and business owners who are not security SMEs.
The methodology also incorporates Attack Libraries. When the threats are generated, the Attack Library is used to provide mitigations for the attack types relevant to each threat. The Attack Libraries are maintained and optimised by security SMEs; the developers and architects only have to apply them, without having to be security experts themselves.
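Conceptually, an Attack Library is a mapping from attack types to SME-curated countermeasures that modelers simply look up. The entries below are illustrative examples of mine, not the actual library contents:

```python
# Hypothetical attack library: maintained by security SMEs,
# consumed as-is by developers and architects.
ATTACK_LIBRARY = {
    "SQL Injection": ["Parameterised queries", "Input validation"],
    "Cross-Site Scripting": ["Output encoding", "Content Security Policy"],
}

def mitigations_for(attack_type):
    """Return the SME-curated countermeasures for an attack type,
    or an empty list if the library has no entry for it."""
    return ATTACK_LIBRARY.get(attack_type, [])
```

This split is what lets non-experts produce useful threat models: the security knowledge lives in the library, not in each modeler's head.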
Luckily enough, there is a cool tool on the very near horizon that we can use to do all this, including automatically generating the threats from our application context, complete with countermeasures from our Attack Library.
This has been a tip of the iceberg introduction to the new ACE TM. I’ll have more on the new ACE Threat Modeling Methodology in the coming weeks.