TMI Newsletter 11: Threat Modeling Definition of Done


Hi there, welcome to our next edition of Threat Modeling Insider.

With this newsletter, we deliver guest articles, white papers, curated articles and tips on threat modeling that help you bootstrap or elevate your security knowledge and skills.

Our “TMI” line-up:

Threat modeling’s definition of done

Guest article by Brook Schoenfield, Master Security Architect at IOActive

I’m frequently asked, “How do you know if a threat model is complete?”

Unfortunately, threat model analyses can become quite non-linear, often recursive, despite our best efforts to torture the process into a sequence of discrete steps. It might seem, on the face of it, that an analysis could go on forever, taking into consideration ever more complex, labyrinthine, even baroque attack scenarios. The fact is that while an analyst can definitely play the “what if” game nearly forever, after a certain point there’s little payback for the additional effort.

A threat model exercise is as much a journey through guesses firmly based on study and experience as it is an exercise in applying engineering certainties. In other words, the analysts’ imaginations are an important input to the process. This is why setting boundaries around attack scenarios is critical to limiting flights of paranoid fantasy; there must be exit criteria for the threat model if other development tasks are to be addressed[1].

It’s important to understand that threat modeling represents a perceived attack tree and collection of defenses for an ongoing, “living” system. Systems exist in an ecosystem that is subject to changes – sometimes sea changes.

System structures change; threat actors are creative and adaptive; research opens new techniques of attack. Each of these can and should trigger a threat model review[2]. In a sense, a threat model is never “done,” since change is constant.

Still, for any particular threat model exercise, guidelines do exist to determine when “enough is enough.”

My definition of threat modeling is “a technique to identify the attacks a system[3] must resist, and the defenses that will bring the system to a desired defensive state.” Whether you like that definition or not, it suggests built-in constraints to an analysis:

  • Identify attacks that a system must resist
  • Identify the defenses that will bring the system to a desired state

In my definition, “must resist” implies an enumeration of relevant attacks. This is a constrained list, not an open-ended enquiry. “Relevant” in this context indicates that some types of attacks won’t or can’t be considered against the system under analysis.

It may be useful to point out that exploits are specific to a computer language, and often also to memory utilization, operating system, even the component and the version or build of that component. Indeed, exploits tend to match vulnerabilities one-to-one. If any of the contextual conditions change, a new exploit has to be generated.

With defenses, there are also limits. Many systems needn’t defend against every potential attack, even if building a comprehensive and complete defense were possible given most organizations’ resource limits.

One of the most important rules for building defenses is that any three (sometimes even two) well-placed persons can circumvent any technical control[4], so that number makes for a very natural constraint: don’t attempt to prevent three or more individuals who can pool their privileges from circumventing security controls in collusion. It’s a waste of time. In such circumstances, stick with the standard practice of separation of duties, coupled with monitoring of those actions that have a potential for collusive behavior. That’s about the best one can manage against privileged actors working together to circumvent security controls.

“Desired state” in the threat modeling definition cited above indicates that near-perfect security may not be necessary. Depending upon the fielding organization’s risk tolerance and the business context of the system, sophisticated attacks might not be relevant, that is, the organization does not feel a need to protect against such attacks. This should be an obvious stopping point for a threat model analysis: attacks requiring high technical sophistication and/or high complexity need not be defended against.

While recent events have certainly pointed towards a world where any digital system might be a target of any actor, there often exist limits beyond which an organization need not go.

For instance, consumer anti-malware software typically does not protect against highly targeted, sophisticated, state-sponsored attacks. That’s because the consumers who count on these products would at worst be collateral damage to some other target. And indeed, those who can count on being a state agency’s target don’t usually rely on consumer-grade software for their protection. Hence, when threat modeling such software, the analyst may discount nation-state attackers and their technically astute techniques, focusing instead upon cybercrime, which has a rather different exploitation model and risk tolerance.

Even sophisticated attackers have their limits. Amongst the exploits leaked from the NSA and CIA in April 2017 (“Shadow Brokers” leak) was a piece of code that identified the presence of a particular anti-malware vendor’s products. If the product was running, the attack did not proceed. Apparently, compromise against this defense was too much trouble, even for the USA’s premier spy agency.

The constraints for attack enumeration are as follows:

  • The risk tolerance of the organization owning/fielding the system
  • The risk tolerance of the system’s users (if any)
  • The capabilities, goals, expended effort, and risk tolerances of the enumerated set of threat agents who will wish to attack
  • The trust/risk profiles of the system’s components, including infrastructure(s) and external entities
  • The runtime/execution environment(s)
  • The existing defenses (including infrastructure defenses and services)
  • The highest sensitivity of data flowing through and being processed
  • The probability of a particular attack scenario being low enough to be discounted (probability might rise over time as new attack techniques are identified)

While somewhat creative, a threat model must be grounded in hard data. Obviously, those active attacks that can be exercised against the system under analysis will be included.

Furthermore, an analyst draws attack scenarios from relevant past exploit/vulnerability pairs even if such vulnerabilities have not yet been found in the system. It’s important to understand that even the most rigorous testing, as Edsger Dijkstra so famously quipped, “proves the system has bugs, not that it doesn’t.” If exploitable vulnerabilities have existed in any component within the system under analysis, or within similar components and technologies, then even though those conditions have since been fixed, the analysis must assume that at least a few similar issues will likely be found at some point in the future.

The analyst need not worry about particular exploit details[5]. Instead, it’s sufficient to know that, for example, there are numerous forms of web input injection that allow an attacker to misuse a web server’s content to attack the web server’s users (such as Cross-Site Scripting attacks).
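As a minimal illustration of one such injection class (reflected Cross-Site Scripting) and the standard output-encoding defense, consider this hypothetical sketch; the function names and the greeting page are invented for the example:

```python
import html

def render_greeting_unsafe(name: str) -> str:
    # Vulnerable: attacker-controlled input is placed directly into HTML.
    return f"<p>Hello, {name}!</p>"

def render_greeting_safe(name: str) -> str:
    # Defense: HTML-encode untrusted input before embedding it in markup.
    return f"<p>Hello, {html.escape(name)}!</p>"

payload = "<script>alert(1)</script>"
print(render_greeting_unsafe(payload))  # the script tag survives: exploitable
print(render_greeting_safe(payload))    # the script tag is neutralized
```

For the threat model, knowing that this entire class of attack exists, and that output encoding is its canonical defense, is enough; the specific payload matters far less.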

Another example would be consideration of attacks that can escape a virtual runtime environment (that is, a virtual machine or container) to take control of the host operating environment (a hypervisor or operating system)[6]. This attack scenario is called a “Guest Escape” attack. Nearly every virtual runtime has experienced at least one guest escape vulnerability. Failure to account for the eventual appearance of a guest escape leaves open the potential for harm at the point in the future when such a vulnerability has been discovered.

The above scenarios are not fictional. Such attacks have been successful recently. While we may wish to gaze into our crystal balls for future attack types, the here-and-now threat landscape offers adversaries plenty of opportunity for malfeasance; a threat model must account for that set of known attacks whose exploitable conditions have been found in the past in the technologies under analysis.

An analyst may stop enumerating attacks when:

  • The attack scenarios seem demonstrably more complex than other methods of compromise that are easier and more readily available
  • The required preconditions lie well outside the range of normal or typical configuration and usage
  • Significant inside assistance (for externally-originated attacks) is required to proceed. (Insider threat is a special category that requires careful analysis across an organization. It should rarely be tackled one system analysis at a time. Separation of duties is often determined on a per-system, per-function, or per-privilege basis.)
  • Where no exploit exists for a particular vulnerability, or the vulnerability is not exposed for remote exercise or research (it’s important to periodically revisit the threat model in light of new developments)
  • Attack scenarios start to border on the ridiculous, the strained, or the dubious, or depend upon computer technologies that have yet to be invented (i.e., “science fiction”)
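These stopping conditions can be captured as a simple triage filter. The sketch below is one possible encoding; the attribute names, the complexity scale, and the threshold are illustrative assumptions, not part of the guidance above:

```python
from dataclasses import dataclass

@dataclass
class AttackScenario:
    name: str
    complexity: int              # 1 (trivial) .. 10 ("science fiction")
    preconditions_typical: bool  # achievable under normal configuration/usage?
    needs_insider: bool          # requires significant inside assistance?
    exploit_plausible: bool      # an exploit exists, or the weakness is exposed?

def worth_analyzing(s: AttackScenario, easier_path_complexity: int = 5) -> bool:
    """Return False as soon as any stopping criterion is met."""
    if s.complexity > easier_path_complexity:
        return False  # demonstrably harder than more readily available methods
    if not s.preconditions_typical:
        return False  # preconditions lie well outside normal usage
    if s.needs_insider:
        return False  # insider threat: analyze org-wide, not per system
    if not s.exploit_plausible:
        return False  # revisit later as the threat landscape changes
    return True

scenarios = [
    AttackScenario("SQL injection on login form", 2, True, False, True),
    AttackScenario("Evil-maid firmware implant", 9, False, True, True),
]
print([s.name for s in scenarios if worth_analyzing(s)])
```

The point is not the code itself but the discipline: each scenario is tested against explicit exit criteria instead of being debated open-endedly.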

An analyst may stop specifying defenses when:

  • Each defense has some overlap of protection with at least one other defense
  • Each significant[7] attack vector is covered at least partially by more than a single defense
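A quick mechanical check of these two exit criteria might look like the following sketch; the vector and defense names are hypothetical placeholders:

```python
# Map each significant attack vector to the defenses that at least partially cover it.
coverage = {
    "sql_injection":  {"input_validation", "parameterized_queries"},
    "stolen_session": {"tls", "short_session_ttl"},
    "guest_escape":   {"seccomp_profile", "host_ids"},
}

def every_vector_multiply_covered(cov):
    # Exit criterion 2: each significant vector is covered by more than one defense.
    return all(len(defenses) >= 2 for defenses in cov.values())

def every_defense_overlaps(cov):
    # Exit criterion 1: each defense shares protection of some vector
    # with at least one other defense.
    all_defenses = set().union(*cov.values())
    return all(
        any(d in defenders and len(defenders) >= 2 for defenders in cov.values())
        for d in all_defenses
    )

print(every_vector_multiply_covered(coverage), every_defense_overlaps(coverage))
```

When both checks pass, the defense-specification phase has met the overlap criteria above and can be declared done for this round.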

Admittedly, the criteria for completing a threat model are qualitative, as they will be for the foreseeable future. Still, the above set of guidelines and constraints can help to define some sense of completion, and provide the ability to declare “enough is enough” when an analysis bumps against one or more of these barrier conditions.

It’s important to remember that a threat model exists within a context of constant change. Hence, in a very real sense, a threat model is a living analysis of a system being implemented or running within the dynamic context of maintenance, updates, and changing threat conditions. As such, “completion” can be seen as the end of a threat modeling exercise or review; a threat model can rarely serve its purpose as a one-time exercise.

However, these very real boundaries, when met, signal that a round of analysis has reached a termination point – a “definition of done.”

Brook Schoenfield, Master Security Architect at IOActive


[1] Some of the other development tasks will likely depend upon the output of the threat model.

[2] “Review” is my carefully chosen word, and does not indicate a complete rework. The model may be reviewed with consideration to changes in the inputs to the model.

[3] “System” is defined broadly, encompassing any collection of digital and human processes that, taken together, provide a complete set of the functions under analysis. A system could be:

  • A piece of code intended as a part of the bootloader (in which case, the threat model would necessarily also include all the bootloader code for the machine).
  • A set of processes running in user space on an operating system (in order to create a proper threat model, a system must include those operating system functions that provide the infrastructure and runtime upon which the application is loaded and runs).
  • A set of globally distributed cloud-based services.
  • An enterprise and all of its infrastructure, digital functions, etc.

The term “system” is meant to be inclusive, such that whatever digital processes are being threat modeled are categorized as a system.

[4] The fact that manipulation or impersonation of persons with the right set of permissions can circumvent technical controls was the premise of the long-running television series, “Mission Impossible.” The later series of movies don’t hinge on this same gambit, however.

[5] However, an understanding of the mechanism of exploitation of at least one example of each particular type of attack helps to identify appropriate defensive measures.

[6] CVE-2019-5736 is a guest escape for Linux containers that was announced February 11, 2019. Failure to account for the possibility of such a vulnerability left many implementations vulnerable. Those implementations that used a specific defense were not vulnerable.

[7] For more stringent security postures, each attack scenario must be mitigated. “Significant” is meant to mean those attack vectors whose successful exercise will cause significant harm. It also implies attack vectors that are considered “credible”; that is, there is sufficient evidence that the attack vector can be exercised by an active threat actor.

“This Threat Modelling Definition Of Done white paper was written for IOActive, Inc. during Brook’s tenure as Director of Advisory Services. It has been reprinted in Appendix E, Secrets Of A Cyber Security Architect, Brook Schoenfield, Auerbach, 2019. Permission granted for this use.”

Curated threat modeling content

A guide to threat modelling for developers

Jim Gumbley (who lives in the UK; you can tell by his spelling of “Threat Modelling” with two l’s) recently published this guide explaining threat modeling in simple steps for developers. We really like Jim’s approach of not trying to create the perfect threat model, but encouraging teams to start simple and grow from there. Exactly our philosophy as well!

Another gem: “The killer application of Threat Modelling is promoting security understanding across the whole team. This is the first step to making security everyone’s responsibility.”

Read more online.

Threat modeling sessions at the summit

The Open Security Summit was a remote event this year. One of the benefits of remote presentations and working sessions is that they lend themselves to presenting a number of tools without the tool vendors needing to travel halfway around the world. Steven took the initiative to contact a number of commercial and open-source threat modeling tool makers and ask them to present their tools to the OSS threat modeling audience, so we could get an idea of what tools are capable of these days.

It is clear that the tools have improved significantly in the last couple of years. Each tool has a specific set of functionalities that cannot be found in the other tools. It was great to see the developers of these tools explain them to us.

Tip: DREAD is dead

We regularly get asked about DREAD as a scoring mechanism for risks related to findings as part of threat modeling. The problem with DREAD (introduced by Microsoft), which scores Damage, Reproducibility, Exploitability, Affected Users, and Discoverability, is that it is quite subjective and hard to scale across different teams.

Microsoft stepped away from using DREAD and now uses the concept of Bug Bars or the Microsoft Security Response Center Security Bulletin Severity Rating System. Other options are to use CVSS or OWASP Risk Rating. So leave the dead alone and choose a risk scoring mechanism that fits your needs!
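As an example of one alternative, the OWASP Risk Rating methodology scores a set of likelihood and impact factors on a 0-9 scale, averages each group, and buckets the averages into LOW (< 3), MEDIUM (3 to < 6), and HIGH (6+). The sketch below is a simplified illustration: it uses only a subset of the methodology’s factors, made-up factor values, and omits the final likelihood-impact severity matrix:

```python
def owasp_risk_rating(likelihood_factors, impact_factors):
    """Average 0-9 factor scores and bucket each average into LOW/MEDIUM/HIGH."""
    def level(avg):
        if avg < 3:
            return "LOW"
        if avg < 6:
            return "MEDIUM"
        return "HIGH"
    likelihood = sum(likelihood_factors.values()) / len(likelihood_factors)
    impact = sum(impact_factors.values()) / len(impact_factors)
    return level(likelihood), level(impact)

# Hypothetical finding: an unauthenticated admin endpoint.
likelihood = {"skill_required": 3, "motive": 7,
              "opportunity": 9, "ease_of_discovery": 7}
impact = {"loss_of_confidentiality": 7, "loss_of_integrity": 7,
          "loss_of_availability": 5, "loss_of_accountability": 9}
print(owasp_risk_rating(likelihood, impact))  # ('HIGH', 'HIGH')
```

Because the factors and thresholds are written down, two teams scoring the same finding should land on the same rating, which is exactly where DREAD tends to fall apart.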

Upcoming public Toreon trainings

  • Online: Whiteboard Hacking a.k.a. Hands-on Threat Modeling hosted by Toreon  (2 x 4h on 22-23 September, 2020)
  • Advanced Whiteboard Hacking a.k.a. Hands-on Threat Modeling hosted by Cqure, Netherlands (6-7 Oct, 2020)
  • Advanced Whiteboard Hacking a.k.a. Hands-on Threat Modeling at Black Hat Europe, London (9-10 November, 2020)

Want to learn more about Threat Modeling training? Contact us, so we can organize one in your neck of the woods.

We aim to make this a community-driven newsletter and welcome your input or feedback. If you have content or pointers for the next edition, please share them with us.

Kind regards,
Sebastien Deleersnyder
CEO, Toreon
