TMI newsletter 18 – Threat Modeling can be considered as fun as cooking a good PASTA meal (Part 2)


Hi there.

Welcome to the latest edition of Threat Modeling Insider.

Again, this edition is loaded with compelling content.

Our complete TMI Line-up:

  • “Threat Modeling can be considered as fun as cooking a good PASTA meal.” – Risk-Centric Threat Modeling, an interview with Marco Mirko Morana, Executive Director and Head of Security Architecture at JP Morgan Chase Co. (Part 2 of 2)
  • Curated Content. Open Security Summit session: “Threat modeling failure modes” by TM expert Izar Tarandach
  • Curated Content. Horoscope as a Service – Using MITRE ATT&CK for threat modeling, an example threat model by Jonathan Baker
  • Toreon Blog Post: Standard risk rating methodologies are good, customized risk rating methodologies are better, by Steven Wierckx.
  • Toreon Tip: “Persona Non Grata (PnG)”, a threat generation technique by Jane Cleland-Huang.

“Threat Modeling can be considered as fun as cooking a good PASTA meal.”

Risk-Centric Threat Modeling, an interview with Marco Mirko Morana, Executive Director and Head of Security Architecture at JP Morgan Chase Co. (Part 2 of 2)

Disclaimer: The views put forth in the article herein are the personal views of Marco Morana based upon his professional experience and do not necessarily reflect the views of the employer (JP Morgan Chase). The references in this article are to sources available in the public domain and are provided solely as informational references, without endorsement by the author. 

How do you recommend including threat modeling in an agile development practice? 

We cover threat modeling and Agile in the PASTA book, but that content is based on discussions we had several years ago with developers executing threat modeling aligned with Agile sprint cycles. Tony and I have been discussing writing version two of the book, and Agile as well as SecDevOps are areas we could update. I believe there are still challenges in applying threat modeling to Agile, and this is because…

the design in Agile evolves through sprints, from prototyping and iterative deployment, and from POC to DEV, TEST and PROD. Ideally, threat models should be executed with an initial baseline of NFRs (Non-Functional Requirements) and then focus on delta changes against that baseline. Threat models can be updated during Agile sprints to identify new attack vectors and to drive new tests for which specific functionalities can be designed and built. As issues such as new vulnerabilities or design flaws are identified, they can be remediated during build cycles, much like the static code scans that run at each build. In SecDevOps, semi-automated threat modeling with real-time extraction of DFDs from components deployed in DEV and TEST can keep the models current, and light threat models can assert mitigations through questionnaires that help developers identify unmitigated threats for the tech stack in scope, ahead of production deployments.
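The "baseline plus delta" idea above can be sketched in a few lines of Python. This is an illustrative sketch, not any particular tool: the component names and the three-way split are assumptions chosen to show how a sprint's changes against a baseline threat model might be triaged.

```python
# Hypothetical sketch: triage the delta between a baseline threat model's
# component inventory and the components deployed in the current sprint.
# Component names are invented for illustration.

def delta_components(baseline: set[str], current_sprint: set[str]) -> dict:
    """Split the sprint's components into buckets for threat model triage."""
    return {
        "new": sorted(current_sprint - baseline),        # never threat modeled
        "removed": sorted(baseline - current_sprint),    # threats can be retired
        "unchanged": sorted(baseline & current_sprint),  # reuse the baseline model
    }

baseline = {"web-frontend", "auth-service", "orders-db"}
sprint_3 = {"web-frontend", "auth-service", "orders-db", "payments-api"}

delta = delta_components(baseline, sprint_3)
print(delta["new"])  # ['payments-api'] -> schedule a delta threat model
```

In practice the "current sprint" set would come from the semi-automated DFD extraction the interview mentions, rather than being typed by hand.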

In what situation is threat modeling most effective? In what situation is threat modeling least effective? 

The answer depends on the scope of threat modeling (some organizations have tens of thousands of apps, whereas others have only a few apps and products) and on how threat modeling is done (lightweight assessments, self-assessments, checklist-based architecture reviews of threat mitigations, semi-automated with tools that derive threat libraries, or fully manual and labor-intensive). First and foremost, there is a need to limit the scope of threat modeling to what is achievable. An effective approach is to apply threat modeling only to the business-critical applications, services and products that bear most of the risk for the organization. Among these are applications that have already undergone security tests in production but that, because of high-risk functionalities or high volumes of stored PII, put customers at risk of cyber-attack. For these high-risk applications, threat modeling is worth the investment and can be very effective at identifying high-severity design flaws and vulnerabilities that other security assessments (e.g., pen testing) are unlikely to find, such as business logic flaws and abuse cases. Based on my experience executing threat modeling for large financial organizations, threat modeling is very effective when combined with code reviews to derive attack vectors that can expose highly critical vulnerabilities. It often comes as a surprise to CISOs that some of these critical vulnerabilities (e.g., showstoppers for production deployment) were not identified by other assessments. Threat modeling can drive attack-driven tests and use and abuse cases that are typically not part of traditional vulnerability assessments. It is definitely effective for applications, new or existing, that have not previously been threat modeled using a risk-based threat modeling methodology. 
We should be honest, though, that there are costs to bear: threat modeling is time-intensive, and those costs are tied to the humans executing the process. Even with tools, threat modeling is at best semi-automated; it still requires skilled and trained threat modeling analysts and cannot be completely replaced by tools, because it takes humans to understand the context of the application, the architecture and the risks. In essence, threat modeling is least effective when its scope is unbounded, and when only a light threat model is applied to all applications in order to scale. By a light threat model I mean a developer self-assessment checklist based on the code snippet and the tech stack being deployed. That still has value, but it is unlikely to identify critical-risk vulnerabilities.
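The scoping rule described above, full threat models only for the applications that bear most of the risk, can be sketched as a simple portfolio filter. The fields and thresholds here are invented assumptions; real criteria would come from the organization's business impact analysis.

```python
# Illustrative sketch of risk-based scoping: pick only the applications
# that warrant a full baseline threat model; everything else gets a light
# self-assessment. The threshold of 100k PII records is an assumption.

def in_scope_for_full_tm(app: dict) -> bool:
    return app["business_critical"] or app["pii_records"] >= 100_000

portfolio = [
    {"name": "marketing-site", "business_critical": False, "pii_records": 0},
    {"name": "payments", "business_critical": True, "pii_records": 2_000_000},
    {"name": "crm", "business_critical": False, "pii_records": 500_000},
]

full_tm = [a["name"] for a in portfolio if in_scope_for_full_tm(a)]
print(full_tm)  # ['payments', 'crm'] -> the marketing site gets the light checklist
```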

Compared to the overall time spent on developing a new system, how much time should be spent on threat modeling? 

The time depends on the threat modeling process that is followed and the scope of the assessment. For new applications you typically execute what is referred to as a baseline threat model, which requires more time than a threat model of project changes to an existing application. Time also depends on the readiness of the artefacts that serve as input to the activities in each stage. In my experience, a baseline threat model of a new application can take 2 to 3 weeks to issue an initial report with gaps and risks, and another 2 weeks for the security testing team to validate those findings (if an attack simulation testing exercise is included in the process). An in-project threat model is much more limited in scope and might take just 1 week if it stands alone to identify design flaws, without testing by the attack simulation team.

PASTA itself is a seven-stage process with several activities that can be time consuming to execute:

  • Stage 1 depends on the availability of standards and policies, data classification documents, and functional and business requirement documents to derive security requirements and perform a Business Impact Analysis (BIA).
  • Stage 2 assesses the technical scope. This can be very wide; hence it is important to define the technical scope of all components of the architecture, their dependencies, and the technical stack.
  • Stage 3 decomposes the application. The threat analyst relies heavily on the availability of architecture diagrams, sequence diagrams, and logical and network diagrams to derive and distill DFDs, use cases, access controls, entry points, trust boundaries, data, and user interfaces.
  • Stage 4 is threat analysis, which depends on inputs such as relevant threat intel reports, security incidents and fraud reports to generate a list of threat agents and attack scenarios.
  • Stage 5 is vulnerability analysis, including the mapping of threats to identified vulnerabilities, whose risk is determined as the exposure to specific attacks. This can be very time consuming, as can stage 6.
  • Stage 6 is attack modeling: identifying attack paths and vectors and deriving the attack surface and attack trees to determine the attack scenarios most likely to be followed by attackers.
  • Stage 7 is risk and impact analysis. This stage seeks to identify gaps in controls and countermeasures and to define risk mitigation strategies that minimize risk and business impact. It requires the involvement of cyber risk management and the business, and it assumes that all the other PASTA stages are completed. It can be time consuming to reach agreement on how to mitigate risk to a level acceptable to the business.
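The seven stages and their key inputs can be captured as a simple ordered structure, which is handy when tracking which input artefacts are ready before starting an assessment. The stage names follow the PASTA methodology; the input summaries paraphrase the interview.

```python
# The seven PASTA stages, as an ordered (number, name, key inputs) list.
PASTA_STAGES = [
    (1, "Define Objectives", "policies, data classification, business requirements, BIA"),
    (2, "Define Technical Scope", "architecture components, dependencies, tech stack"),
    (3, "Application Decomposition", "DFDs, use cases, entry points, trust boundaries"),
    (4, "Threat Analysis", "threat intel, security incident and fraud reports"),
    (5, "Vulnerability & Weakness Analysis", "scan results mapped to threats"),
    (6, "Attack Modeling", "attack trees, attack surface, attack paths"),
    (7, "Risk & Impact Analysis", "control gaps, mitigation strategies"),
]

for num, name, inputs in PASTA_STAGES:
    print(f"Stage {num}: {name} <- {inputs}")
```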

What is the relation between threat modeling and other security activities as part of a secure development lifecycle? 

Threat modeling can be executed for new applications during design, before code is built, before the code is scanned, and before apps are deployed to production, operated, and monitored. Since it is executed at early stages and ahead of SecDevOps, threat model artefacts such as attack-driven tests can feed other security activities: they augment the test cases of manual pen testers, who determine the risk of exploiting the design gaps and threats identified. Which threats remain open or unmitigated also depends on the mapping to vulnerabilities identified by SAST, DAST or IAST. In some cases, when RASP is deployed, threat models can also feed into RASP for operational applications: RASP can implement rules to block or detect attack vectors for new threats and new vulnerabilities (such as zero-days). RASP can also feed the threat models (instead of only being fed by them) by detecting attack vectors whose countermeasures should be deployed, while the RASP rules block those attacks or the exploitation of unmitigated vulnerabilities.
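The idea that a threat's open/closed status depends on mapping it to scanner findings can be sketched as a small join. All threat and finding identifiers below are invented for illustration; in a real pipeline they would come from the threat model and the SAST/DAST/IAST tools.

```python
# Hypothetical sketch: a threat stays "open" until every vulnerability it
# maps to has been confirmed remediated (e.g. by a SAST/DAST re-scan).

threat_to_vulns = {
    "T1: SQL injection via search API": ["FIND-001", "SAST-042"],
    "T2: session fixation on login": ["DAST-007"],
}
remediated = {"SAST-042", "DAST-007"}  # findings closed by the scanners

open_threats = [
    threat for threat, vulns in threat_to_vulns.items()
    if any(v not in remediated for v in vulns)
]
print(open_threats)  # ['T1: SQL injection via search API']
```

A RASP integration, as described above, could consume `open_threats` to decide which blocking rules to keep active while remediation is in flight.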


How can we make threat modeling more effective for a development team? 

Well, we should ask developers (laughs). Working closely with developers, I can say that threat modeling can help them secure the code of their applications more effectively. For example, each code snippet might have code-specific threats generated for it, and developers can self-test whether their implementation mitigates them correctly. We can make this more effective for them by automating these attack-driven tests, based on the technology stack and the code library frameworks.

For people who are new to it, how do you recommend that they can start with threat modeling? 

It is important to be clear about whom you would like to recommend, or rather "sell", threat modeling to. The concept of risk mitigation relates to what the business impact of unmitigated risks means to the business. Some of this impact can be attributed to design flaws in the architecture, which a threat model can identify where other assessments are less likely to, especially assessments that are automated and not context-aware, as threat modeling is supposed to be. Explain the risks to your audience by articulating the difference between threats, attacks, vulnerabilities, and risk to the data assets; the initial conversation should be about potential business impacts, including regulatory impacts as well as operational impact, whether tangible (e.g., data loss, denial of service, fines) or intangible (e.g., reputational damage). It is important to explain the value of knowing which threats and cyber-attacks potentially target your business, and whether any threat agents are already targeting your applications and products. Threat modeling makes these threats visible to your organization, allowing it to make informed risk decisions on how to mitigate them. I do not recommend starting by simply deploying threat modeling tools for developers to use, but rather by having experienced threat modeling analysts perform a full-blown threat modeling exercise plus an attack simulation to identify gaps in countermeasures that need to be remediated. That is the start. After that, companies should build a threat modeling practice within their SDLC, investing in training the workforce and in threat modeling tools.

What kind of tools would you recommend for doing threat modeling? 

Security is always a matter of three basic ingredients: for a successful security outcome, you need tools, people, and processes. I am not particularly biased toward any specific type of threat modeling tool, as long as the tool helps execute a threat modeling process consistently and accurately. From a process point of view, of course, I am biased toward risk- and attack-centric processes like PASTA, and a tool can help execute one or more of the stages and activities that are part of the methodology. Security tool vendors have evolved, and today there is no reason to just use Visio as a TM tool to generate DFDs and apply threats to architecture components. There are tools that carry threat libraries and apply them automatically depending on the technology scope of the tech stack and the exposure of the components to those threats. There are tools that can map threats to vulnerabilities identified by other tools, integrate with the most commonly used issue trackers to open and close issues identified in threat modeling, and track all the threat modeling workflows to be executed, including attack/testing workflows. I am a big fan of threat modeling tools that integrate with ways to detect and protect against threats by automatically generating detection and protection rules using RASP technology, as well as of tools that can extract architecture diagrams from deployed components via application discovery, or derive the architecture from application configuration such as Terraform IaC before the components are deployed.

How can we scale up threat modeling in large organizations? 

This is an interesting and also challenging question, because it points to a limitation of threat modeling as a process: it ranges from light and semi-automated to in-depth and fully manual, which is human-intensive and does not scale. If we go fully manual, to be honest, even assuming your organization has an unlimited threat modeling budget, which would be nice (laughs), it would still not be feasible to hire enough threat modeling analysts for all the applications that need to be threat modeled in a large organization. There are ways to address these limitations. One is to simplify the threat modeling process into a light, semi-automated version that can be executed at larger scale and scoped to changes of existing applications, while the bulk of the baseline threat models is done on the most business- and mission-critical applications. Another way to scale is to apply threat modeling to approved architecture design patterns, on which new applications can be built and deployed using approved technology stacks, software libraries, platform services and products. For example, for components such as APIs, leverage threat models to derive secure design patterns and blueprints. Applications can then be designed using these design patterns, which already have associated threat models and a list of countermeasures to be deployed for the target architecture. By adopting a threat-model-driven design or architecture blueprint approach, threat models can be reused across the applications that adopt the same design patterns and blueprints. Some of these threat models might already consider the application use cases and the exposure to external or internal threats, as well as the threats to the data, which might also include the impact of non-compliance with regulatory and/or privacy requirements (e.g., PCI-DSS, GDPR).
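The pattern-reuse approach described above can be sketched as a lookup: each approved design pattern ships with a pre-built threat list, and a new application inherits the union of the lists for the patterns it is composed from. All pattern and threat names here are invented for illustration.

```python
# Sketch of threat model reuse via approved design patterns: a new app
# starts from the threats already modeled for its building blocks,
# before any app-specific analysis. Names are illustrative assumptions.

PATTERN_THREATS = {
    "public-rest-api": ["broken object-level authorization", "rate-limit abuse"],
    "oauth2-login": ["token replay", "redirect URI manipulation"],
    "managed-postgres": ["excessive privilege grants"],
}

def inherited_threats(patterns: list[str]) -> list[str]:
    """Collect the pre-modeled threats for every pattern the app uses."""
    threats: list[str] = []
    for pattern in patterns:
        threats.extend(PATTERN_THREATS.get(pattern, []))
    return threats

app_threats = inherited_threats(["public-rest-api", "oauth2-login"])
print(len(app_threats))  # 4 threats inherited on day one
```

The scaling win is that the per-pattern lists are built once by experienced analysts and then reused by every team that adopts the pattern.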

Curated threat modeling content

Open Security Summit session: “Threat modeling failure modes”

In this talk Threat Modeling expert Izar Tarandach gives us a general ‘lessons learned’ session. After all, the best way to improve is to learn from our mistakes, right? Check it out!

Threat model example: Horoscope as a Service – using MITRE ATT&CK for threat modeling.

When threat modeling, we heavily recommend basing your analysis on a threat-informed defense framework, such as the open-source and community-driven MITRE ATT&CK, and to then translate the results into the language of the business (possible scenarios and costs). This is the best way to change things in your organization. Jonathan Baker shows us how this can be done with a fictitious example on Github.


Toreon blog post: Adapting risk calculation to your needs

In this blog post, our AppSec lead Steven shows why and how customized risk rating methodologies are always better than standard ones. Yes, always. You can read all about it here.

Toreon tip

Persona Non Grata (PnG) is a threat generation technique by Jane Cleland-Huang. She suggests describing potential threat actors as archetypical users of a system who may have mischievous or even explicitly malicious end goals. By visualizing and describing these personas, their real-world motivations and possible misuse cases can be developed, which in turn helps illuminate potential attack vectors and vulnerabilities. This is a valid technique for identifying a potential threat (related to the persona) with a high degree of confidence.
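A PnG is easy to capture as a small structured record: an archetype plus the motivations and misuse cases that flow from it. The persona below is entirely fictitious, invented to show the shape of the technique.

```python
# A Persona Non Grata as a small data record: archetype, motivations, and
# the misuse cases those motivations suggest. Content is illustrative.

from dataclasses import dataclass, field

@dataclass
class PersonaNonGrata:
    name: str
    archetype: str
    motivations: list[str] = field(default_factory=list)
    misuse_cases: list[str] = field(default_factory=list)

insider = PersonaNonGrata(
    name="Sam",
    archetype="recently demoted sysadmin",
    motivations=["revenge", "financial gain"],
    misuse_cases=["exfiltrate the customer database", "abuse retained admin credentials"],
)

# Each misuse case points the team at a concrete attack vector to model.
for case in insider.misuse_cases:
    print(f"Investigate attack vector: {case}")
```

Writing personas down this way makes them easy to review in a workshop and to map, one misuse case at a time, onto entry points in the DFD.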

We aim to make this a community-driven newsletter and welcome your input or feedback. If you have content or pointers for the next edition, please share them with us.

Kind regards,
Sebastien Deleersnyder
CTO, Toreon

Book a seat in our upcoming trainings

We also organize in-company training for groups of 10 or more participants.
