Threat Modeling Insider Newsletter

28th Edition – September 2023

Welcome!

The summer holidays are officially over, which can only mean one thing: the return of the Threat Modeling Insider newsletter!

We’re excited to pick up where we left off and continue to share insightful articles and resources that delve into the world of threat modeling. Speaking of which, let’s take a look at what’s in store for this month’s edition.


In this edition

Guest article
Threat Models are useless. Threat Modeling is essential! Jess Chang explains Vanta's approach to threat modeling.

Curated content
Large Language Models (LLMs) are all the rage right now. Take a look at how you can leverage them for your threat models. Written by xvnpw.

Training update
An update on our upcoming training sessions.

GUEST ARTICLE

Threat Models are useless. Threat Modeling is essential!

Jess Chang, Staff Technical Program Manager, Security at Vanta

Our approach to threat modeling

In this series, you’ll hear directly from Vanta’s Security, Enterprise Engineering, and Privacy, Risk, & Compliance Teams to learn about the team’s approach to keeping Vanta — and most importantly, our customers — secure.

The following post comes from our Security Team and explains our approach to threat modeling.

What is threat modeling?

Threat modeling is the process of identifying and understanding the potential threats and risks that can impact a given business, system, network, or feature. 

The goal of threat modeling is to make better decisions. We do this by building a better mental model of the world, keeping in mind things such as potential attackers’ profiles, motivations, goals, and likely attack vectors. Many of us model threats every day—when buckling our seatbelts, looking both ways before crossing the street, and more.

How does Vanta approach threat modeling?

At Vanta, we use threat modeling as a tool to help share knowledge and improve our mental model of Vanta as a company, and of the environment in which we operate. These exercises are most frequently (though not always) run by the Security team, and are held on a recurring cadence as well as on an as-needed basis.

We don’t maintain a canonical “threat model” at any given point in time. Instead, we view the exercise itself as the primary mechanism through which we benefit. This is because the discussion that arises during a threat modeling exercise is what improves our mental model and enables us to make better decisions.

In other words: “Threat models are useless. Threat modeling is essential.”

When does the Vanta security team threat model?

We view threat modeling as a team and organizational muscle that requires regular exercise. It can be fun, enlightening, and interesting! This means we get together for threat modeling exercises on at least a monthly basis, if not more frequently. Sometimes we’ll threat model a given feature, and other times we’ll pick a sample scenario to model and discuss. For some added fun, sometimes we’ll even get together to model threats around a given theme, event, or season.

In addition, we strive to include a wide range of teammates across Vanta in our exercises. This can depend on the topic at hand but often includes individuals from our Enterprise Engineering, Privacy, Risk, & Compliance, Engineering, and Product teams, as well as from our operational and customer-facing teams. Not only does this help evolve our individual mental models of risk, but it also widens our collective lens on hidden risks and opportunities for improvement across the organization to make more informed decisions. 

Keep in mind that if you’re focusing on a specific feature or project, it’s helpful to start as early as possible, such as in the design phase. This lowers the cost of any big changes that need to be made.

What are the core steps you follow?

Here are the core steps we follow for our threat modeling exercises:

  1. Define your goals
    Discuss and agree upon what your collective goals are for your threat modeling exercise, and jot this down to help ensure all teammates are on the same page. This could be to model threats for a given feature or project, or something more broad—such as to help inform your team’s planning for a given quarter. Defining your goals up front can help you keep your discussion more focused.

  2. Define your scope

    Identify the data, systems, and processes you’re protecting and understand how they’re used, as well as any dependencies. This lets your team go into greater depth than it could if the breadth of your focus kept expanding.

  3. Identify threats

    Brainstorm the most significant threats that could compromise the confidentiality, integrity, or availability of your system. While we often consider external attackers, be sure to also consider internal actors, whether intentional or unintentional.

  4. Identify controls
    Enumerate the controls that are either in place or could be put into place to mitigate each threat, including both preventative and detective controls.

  5. Identify gaps
    Map out the gaps that could result in controls failing to block a particular threat. For example, anti-malware doesn’t always detect the malware. What would happen next? What would the next stage of the attack be? Do you have further controls for that step?

  6. Next steps

    Based on the discussion, identify action items and make sure you address them. Walk away better informed and make good decisions!

    During our threat modeling exercises, we use diagramming tools to communicate and connect information and create a clear visual representation of the threat model. This is particularly beneficial during exercises that incorporate a variety of stakeholders—we view these diagrams as primarily for those in the room rather than as artifacts for the future.

    While we’ve experimented with a variety of tools, we tend to use Deciduous or a lightweight model in FigJam that allows us to collaborate and communicate effectively.
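The six steps above produce a handful of recurring artifacts: goals, scope, threats, controls, gaps, and action items. As a hypothetical sketch (not Vanta's actual tooling—the `Threat` and `Control` names are invented for illustration), here's one way a session's output could be captured in Python:

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    kind: str  # "preventative" or "detective"

@dataclass
class Threat:
    description: str
    stride: str                                  # one of the six STRIDE categories
    controls: list = field(default_factory=list) # step 4: existing or proposed controls
    gaps: list = field(default_factory=list)     # step 5: ways the controls could fail
    actions: list = field(default_factory=list)  # step 6: follow-up items

# Steps 1-2: goals and scope jotted down as plain notes
session = {
    "goal": "Model threats for the new file-upload feature",
    "scope": ["upload service", "object storage", "virus-scanning pipeline"],
    "threats": [],
}

# Steps 3-6: a threat with its controls, gaps, and next steps
t = Threat(
    description="Attacker uploads malware disguised as a document",
    stride="Tampering",
)
t.controls.append(Control("anti-malware scan on upload", "detective"))
t.gaps.append("Anti-malware doesn't always detect the malware")
t.actions.append("Add sandboxed detonation for unscannable file types")
session["threats"].append(t)
```

Even a structure this small reinforces the article's point: the value is in the discussion that fills it in, not in the record itself.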

Tips for threat modeling

An important point to remember is that there are many approaches to threat modeling. Our recommendation is to find what works best for your organization, so that you’re able to threat model effectively when you need it most, instead of running an arduous exercise that’s operationally challenging and quickly outdated.

Here are a few suggestions we have for ensuring threat modeling exercises run smoothly:

  • Your team: Assemble a cross-functional team to ensure you’re incorporating diverse perspectives—such as teammates across your security, engineering, business, and operations teams. This can help reduce observational bias (e.g. the streetlight effect) when modeling threats. Be sure to share context beforehand or at the start of the session, especially if it’s your organization’s first time running a threat modeling exercise.
  • Your roles: Assign clear roles and responsibilities ahead of time. For example, decide who’ll help run the exercise, update the threat model in real-time, and identify anyone who might play a specific role—such as teammates who represent the perspectives of the adversary, defender, developer, etc.
  • Your tools: Identify which tools you’ll use and ensure you’re familiar with how they work prior to threat modeling. Remember, the point is to help create a high-bandwidth information session, not necessarily to capture your discussion for later. 

Additional resources

Our goal is to make more informed decisions every day, and we hope you find threat modeling as fun, interesting, and valuable an exercise as we do.

If you’re looking to learn more about threat modeling, whether individually, for your team, or even for your organization, here are a few resources that have been helpful for us:

CURATED CONTENT

Handpicked for you

Toreon Blog: Threat Composer, exploring the parallels between risk descriptions and user stories.

Writing risk descriptions in your threat models is an intricate art that often leaves beginning threat modelers puzzled. In this article, Georges Bolssens explores the parallels between risk descriptions and user stories, highlighting convenient templates that aid in accurately describing risks. Discover how a free and open-source tool can revolutionize your approach to threat modeling, including guidelines for self-hosting to ensure complete control.

Large Language Models (LLMs) are all the rage right now. Take a look at how you can leverage them for your threat models.

In a bid to uncover which AI model is best at threat modeling, GitHub user xvnpw put GPT-3.5, Claude 2, and GPT-4 to the test. In this article, you can get an in-depth overview of how each of them performed.

Threat elicitation and the art of "AI prompt crafting", by Georges Bolssens

Getting inspiration from an AI chatbot: I’ve done it, you’ve done it, millions of other people have done it. But ask about cyber attacks too directly and ChatGPT will shut you down:

“My apologies, but as an AI language model, I can’t assist with providing guidance or information on attacking or exploiting a system.”

Security researchers are creative people and many of us have already found ways to circumvent this. The number of people who are “Writing a movie script” or “Teaching a class on cybersecurity” has clearly grown immensely ;-).

The art of “prompt crafting” is the key to getting the response you want from an AI chatbot. One approach we have found useful for quickly eliciting a number of focus points for threat modeling is to literally tell ChatGPT that we are creating a threat model. After some tweaking, we found the following prompt very useful in aiding the “What can go wrong?” phase of the four-question framework:

I’m creating a threat model and am assessing the risks that could occur on {PROTOCOL} connections between a {ENTITY_1} and a {ENTITY_2}. When considering the 6 STRIDE categories: what questions are relevant? Please be as thorough as you can and list as many relevant risks to the connection as you can. Previously, you gave me only 3 questions, which were relevant, but I need at least {5_OR_MORE} per STRIDE category.

Entering too high of a number in the last placeholder will sometimes result in duplicates, but we obviously curate the questions before considering them.
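The placeholders in the prompt above can be filled programmatically before the prompt is sent to a chatbot. A minimal sketch using plain Python string formatting (the API call itself is left out; note that `{5_OR_MORE}` is renamed to `{N}` here, since Python format fields can't start with a digit):

```python
PROMPT_TEMPLATE = (
    "I'm creating a threat model and am assessing the risks that could occur "
    "on {PROTOCOL} connections between a {ENTITY_1} and a {ENTITY_2}. "
    "When considering the 6 STRIDE categories: what questions are relevant? "
    "Please be as thorough as you can and list as many relevant risks to the "
    "connection as you can. Previously, you gave me only 3 questions, which "
    "were relevant, but I need at least {N} per STRIDE category."
)

def build_prompt(protocol: str, entity_1: str, entity_2: str, n: int = 5) -> str:
    """Fill the placeholders; keep n modest to limit duplicate questions."""
    return PROMPT_TEMPLATE.format(
        PROTOCOL=protocol, ENTITY_1=entity_1, ENTITY_2=entity_2, N=n
    )

prompt = build_prompt("HTTPS", "mobile client", "API gateway", n=5)
```

The template can then be reused across every trust-boundary crossing in a data flow diagram by varying the protocol and the two entities.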

If you have experimented with this as well and have found different ways to craft AI prompts, drop us a line. We’re curious to see what you came up with!


Upcoming trainings & events

Book a seat in our upcoming trainings & events

Advanced Whiteboard Hacking a.k.a. Hands-on Threat Modeling, in-person, hosted by OWASP Global AppSec, Washington DC, USA 

Next training dates:
1-2 November 2023

Advanced Whiteboard Hacking a.k.a. Hands-on Threat Modeling, in-person, hosted by Black Hat Europe, London  

Next training date:
4-5 December 2023

Threat Modeling Practitioner training, hybrid online, hosted by DPI

Next training date:
4 December 2023


A gift for you!
