Threat Modeling Insider – January 2026

Threat Modeling Insider Newsletter

50th Edition – January 2026

Welcome!

Welcome to this month’s edition of Threat Modeling Insider! In this edition, Sandesh Mysore Anand shares his view on Threat Modeling versus Security Design Reviews.

Next, on the Toreon Blog, Sebastien Deleersnyder takes on the “Attacker Mindset” and explains why a defender-focused approach does more to make Threat Modeling stick.

There’s plenty of other actionable insight ahead, so settle in and let’s get started!

In this edition

Tips & tricks
The OWASP AI Exchange Project

Training update
An update on our training sessions.

Guest Article

Threat Modeling (TM) vs. Security Design Reviews (SDR)

If you’ve spent any time in AppSec, you’ve probably heard Threat Modeling (TM) and Security Design Reviews (SDR) used interchangeably. I know I have. Sometimes they’re treated as synonyms. Sometimes SDRs are described as “lightweight threat models.” Sometimes threat modeling is positioned as an advanced form of design review.

The definitions aren’t wrong. The framing is.

SDR is, in many ways, a specialized form of threat modeling. Both activities analyze representations of a system and surface security concerns. But treating them as interchangeable causes real problems. Teams either overload engineers with open-ended security debates when a focused review would suffice, or they reduce deep risk analysis into shallow checklists when the situation actually calls for broader threat exploration.

Understanding where SDR fits within the broader TM umbrella, and more importantly, when to use which approach, is what separates AppSec programs that scale from those that burn out their teams or miss critical risks.

The confusion usually starts because TM and SDR share the same intellectual roots.

The Common Starting Point

Both TM and SDR originate from the same foundational idea, best captured by The Threat Modeling Manifesto:

Analyzing representations of a system to highlight concerns about security and privacy characteristics.

That definition packs three important ideas. “Representations of a system” means diagrams, designs, code, data flows, and mental models. “Concerns about security and privacy” means we’re talking about risk, not software bugs in general. And “highlight” means surface, not necessarily solve.

This shared origin is why teams conflate TM and SDR. But once you move past the definition and into practice, there’s one question that separates them cleanly:

What decision is this activity trying to enable, and for whom?

That’s it. That question alone will save you hours of semantic debate.

When They Happen in the SDLC

Threat Modeling is not tied to a single SDLC phase. You can (and should) use a threat model during early design, mid-build when implementation details emerge, before a penetration test, after an incident, or while reviewing a legacy system no one fully understands anymore. We all have these, let’s be honest.

In fact, good pen testers almost always build an implicit threat model before testing. They ask: What’s exposed? Where are trust boundaries? What would I attack first?

TM exists to help security thinkers reason about attacker behavior and system exposure. Timing is flexible because understanding risk is always valuable.

SDRs are different. They are design-stage controls by definition. Their job is to influence what gets built, not just how it might be attacked later. An SDR typically happens when a new system or feature is proposed, when a material architectural change is introduced, or when new data types, integrations, or trust boundaries are added.

If you’re reviewing code changes only, you’re already late. SDRs work on the principle that there’s still time to change direction.

I think about the difference this way: TM helps security thinkers reason about attackers. SDRs help builders make better decisions before those decisions harden into code.

Inputs

Threat Modeling: “All the information you have”

There is no canonical input for TM. Depending on scope and timing, inputs might include interviews with engineers, architecture diagrams (current or outdated, usually outdated :P), source code, API specs, operational knowledge, even tribal memory.

TM adapts to ambiguity. In fact, ambiguity is often the point. It reveals where understanding is thin. That flexibility is great, but it’s also why TM is hard to standardize and scale. You can’t templatize “talk to people and figure out what’s going on.”

SDRs: What teams already produce

SDRs intentionally anchor to existing artifacts: PRDs, design docs, architecture diagrams, tech specs, and the current codebase when relevant.

SDRs work because they ride the same rails as engineering workflows. They don’t ask teams to produce new abstractions just for security. They meet teams where they already are. 

This is a big deal. If you’ve ever tried to get engineering teams to draw Data Flow Diagrams specifically for security reviews, you know the adoption pain. SDRs sidestep that entirely.

Outputs

Threat Modeling: Understanding, not instructions

TM outputs vary wildly depending on methodology and scope. You might end up with lists of threats, attack trees, risk narratives, annotated diagrams, or abuse cases. Using STRIDE yields different artifacts than PASTA. A lightweight whiteboard session looks nothing like a formal workshop.

And that’s okay, because the goal is insight. Threat Modeling answers questions like: What could realistically go wrong here? Which attack paths matter most? Where are we exposed in ways we didn’t expect?

It sharpens security intuition. It does not necessarily tell engineers what to implement next. This is where many (not all) threat modeling initiatives fall apart. 

The engineers walk away saying, “Okay, but what do I do now?”

SDRs: Clear, actionable security requirements

SDRs produce something much more constrained and operational. Things like: authentication must be enforced at service boundaries, not the gateway. Secrets must be sourced from the platform secret manager. PII must be encrypted at rest using approved key management. This service must emit audit logs for these events.

They’re not expressed as threats, but as explicit design decisions or security requirements teams should implement. A good SDR output is unambiguous, clarifies the why, maps directly to implementation, and can be validated later.
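To make that concrete, here is a minimal sketch (in Python) of what a structured SDR output could look like. The SecurityRequirement shape and its field names are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

# Hypothetical structure for illustration; the field names are assumptions,
# not a standard SDR schema.
@dataclass
class SecurityRequirement:
    requirement: str  # what to implement
    rationale: str    # the "why" behind the decision
    validation: str   # how to verify it later

sdr_output = [
    SecurityRequirement(
        requirement="Enforce authentication at service boundaries, not only at the gateway",
        rationale="Gateway-only auth leaves internal services exposed to lateral movement",
        validation="Integration test: direct service calls without a token must return 401",
    ),
    SecurityRequirement(
        requirement="Source all secrets from the platform secret manager",
        rationale="Hard-coded or env-file secrets leak through repos and logs",
        validation="Secret scanning in CI plus a deploy-time configuration audit",
    ),
]
```

Each entry is unambiguous, carries its rationale, and names a validation step, which is exactly what lets SDR outputs be checked later.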

Automation: SDR scales easily (thanks, AI).

Why Threat Modeling is hard to automate

TM struggles with automation for structural reasons. There’s no universal definition of “done.” Outputs vary by methodology. Diagramming is central and subjective. Human judgment is the core value.

You can assist Threat Modeling with tools, but full automation misses the point. The exercise is about thinking, not just the conclusion. 

I know this sounds like something a Luddite would say (far from it: I run an AI company), but I genuinely believe TM’s value comes from the conversation, not the artifact.

Why SDR automation is now feasible

SDRs are input-constrained, output-structured, and decision-oriented. This makes them a strong fit for LLMs. Modern systems can read design artifacts, understand architectural intent, apply security policy contextually, and produce consistent, explainable requirements. Here, automation doesn’t remove judgment. It enables better decision-making at scale.
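As a rough illustration, an automated SDR pipeline could look like the sketch below. `llm_complete` is a placeholder for whatever model API you use, and the prompt, policy input, and output shape are assumptions for illustration, not a fixed standard:

```python
import json

def llm_complete(prompt: str) -> str:
    # Placeholder: swap in your model provider's client here.
    raise NotImplementedError

def run_automated_sdr(design_doc: str, security_policy: str) -> list[dict]:
    """Read a design artifact, apply policy, and emit structured requirements."""
    prompt = (
        "You are performing a Security Design Review.\n\n"
        f"Security policy:\n{security_policy}\n\n"
        f"Design document:\n{design_doc}\n\n"
        "Return a JSON list of requirements, each with the fields "
        "'requirement', 'rationale', and 'validation'. Where the design is "
        "too ambiguous to review, set 'escalate_to_tm' to true."
    )
    return json.loads(llm_complete(prompt))
```

The point is the constrained input and structured output: because the artifacts and the expected result are well defined, the review can run consistently on every change.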

Mature Programs Use Both

Here’s the pattern I’ve seen mature AppSec orgs move towards: 100% SDR coverage for all changes (every team, every time, without heroics) combined with targeted Threat Modeling for high-risk systems, novel architectures, major trust boundary shifts, and incident-driven retrospectives.

In many cases, SDRs act as the filter: if an SDR flags an elevated or unclear risk, escalate to threat modeling. If not, proceed with confidence.

This flips the common anti-pattern on its head. Instead of threat modeling everything (and burning out your security team), you use SDRs to decide when deep modeling is warranted.

A Practical Decision Matrix

Depending on your company’s workflow, a good way to think about this is: run an SDR for every meaningful change, and escalate to TM only when certain criteria are met.

In other words, one output of an SDR is a decision on whether a deeper threat model is even necessary. SDRs become the triage mechanism that prevents your security team from drowning in full-blown Threat Modeling sessions for routine work (a small code sketch of this triage logic follows the matrix below).

Here’s an example of how such a framework might look, though the specific triggers will vary based on your organization’s risk appetite, team capacity, and the nature of what you’re building:

Situation                         | What to do      | Why
----------------------------------|-----------------|---------------------------------
Routine feature change            | Run an SDR      | Validate decisions efficiently
New integration or data type      | SDR first       | Establish requirements early
Design ambiguity or disagreement  | Escalate to TM  | Shared understanding needed
Novel or high-risk architecture   | TM explicitly   | Unknown attacker paths dominate
Post-incident or pre-pen test     | TM              | Maximize learning
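Here is that triage logic as code. The trigger names are assumptions; tune them to your organization’s risk appetite and team capacity:

```python
# A sketch of the decision matrix above. Trigger names are illustrative;
# adapt them to your own risk criteria.
def triage(change: dict) -> str:
    if change.get("post_incident") or change.get("pre_pen_test"):
        return "Threat Model (maximize learning)"
    if change.get("novel_or_high_risk_architecture"):
        return "Threat Model explicitly (unknown attacker paths dominate)"
    if change.get("design_ambiguity_or_disagreement"):
        return "Escalate to Threat Modeling (shared understanding needed)"
    if change.get("new_integration_or_data_type"):
        return "SDR first (establish requirements early)"
    return "Run an SDR (validate decisions efficiently)"

# A routine feature change falls through to a plain SDR:
print(triage({}))  # -> "Run an SDR (validate decisions efficiently)"
```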

How This Works in Practice

Let me give you a concrete example. A SaaS team introduces a new feature syncing customer data to a third-party analytics service. An automated SDR is triggered using the existing PRD and design doc.

Most controls look standard, but assumptions around cross-tenant isolation are unclear and need further review. Rather than forcing premature requirements (or, worse, handwaving the risk away), the AppSec team escalates to a focused threat-modeling session. 

That session surfaces a lateral movement path no one had considered. With this understanding, the team returns to the SDR, refines requirements, and proceeds with confidence.

Each practice does exactly the job it’s meant to do. No overlap, no gaps.

Conclusion

If you run an AppSec program, aim for 100% SDR coverage for meaningful changes and targeted threat modeling for high-risk or ambiguous scenarios.

And remember: Threat Modeling is stage-agnostic because it optimizes risk understanding. Security Design Reviews are design-bound because they optimize decision-making. 

SDR lives within the TM family, but conflating them leads to processes that either do too much or too little.

Once you internalize that distinction, everything else (timing, inputs, outputs, automation, scale) falls into place.

Handpicked for you

Toreon Blog: Why Thinking Like a Defender Beats the Attacker Mindset

“Think like an attacker.” It’s our industry’s favorite mantra, but for most engineering teams, it’s a setup for failure. It expects developers – who spend their days perfecting “happy flows” – to suddenly pivot into a destructive mindset that goes entirely against their nature.

This creates a bottleneck for organizations attempting to scale threat modeling, as engineers frequently find themselves paralyzed by “creator-blindness”—the natural cognitive inability to see flaws in a system they have specifically designed to succeed. To overcome this paralysis, many teams turn to GenAI for rapid answers, only to be caught in a validation gap where they lack the specialized security expertise required to distinguish between a helpful insight and a dangerous hallucination.

The truth is, you don’t need more “attackers” on your payroll. You need to lean into the Defender’s Advantage. Here’s why shifting the focus back to your own domain is the better way to make threat modeling stick.



Curated Content

STRIDE-GPT As an MCP Server: A Composable Tool That AI Agents Can Use Autonomously

STRIDE-GPT has been released as an MCP server, enabling AI agents to autonomously perform full threat modeling on a codebase or architecture while fitting seamlessly into agent-based workflows. It delivers STRIDE-based threat analysis, risk scoring, mitigations, and executive-ready reports, all composable with other MCP tools like GitHub and Terraform.

Key takeaways:

  • STRIDE-GPT turns threat modeling into an autonomous, composable AI workflow, integrating code, infrastructure, and security analysis in a single session

  • It provides comprehensive coverage, including all six STRIDE categories, DREAD risk scoring, attack trees, and OWASP LLM Top 10 (2025) AI/ML threats

  • The tool is free, open source, and flexible—usable interactively or fully autonomously—while supporting modern domains beyond web apps, such as cloud, APIs, IoT, and mobile

Global Risks Report 2026 - World Economic Forum

The Global Risks Report 2026 frames “Uncertainty” as the defining condition of a new Age of Competition, marked by weakening cooperation, rising multipolar rivalry, and eroding trust. The outlook is starkly pessimistic, with escalating geoeconomic conflict, accelerating AI risks, and long-term environmental threats reshaping how global and systemic risks must be understood.

Key takeaways:

  • Geoeconomic confrontation is now the top short-term global risk, pushing threat modeling to include infrastructure sabotage, supply-chain weaponization, and cyber-physical attacks on critical systems

  • AI has shifted from a tool-level risk to a systemic one, with concerns ranging from misinformation and deepfakes to adversarial data poisoning, automated escalation, and long-term governance failures

  • Cryptographic complacency, information warfare, and fragmented supply chains demand forward-looking models that account for quantum “harvest now, decrypt later” threats and the erosion of digital trust and strategic dependencies

Tips & tricks

The OWASP AI Exchange Project

The OWASP AI Exchange project is an invaluable resource for anyone building or protecting AI systems. Check out their brand-new Overview page. As a threat modeler, you will find a lot of useful material in the “Controls overview” section.

Our trainings & events for 2026

Book a seat in our upcoming trainings & events

2-Day Training: AI Threat Modeling Next Generation: From Whiteboard Hacking to Hands-on Prompting, in-person, London OWASP Training Days, UK

25-26 Feb 2026

Threat Modeling Practitioner training, hybrid online, hosted by DPI, Europe Cohort

March 2026

Threat Modeling Practitioner training, hybrid online, hosted by DPI, US Cohort

April 2026

Advanced Whiteboard Hacking – aka Hands-on Threat Modeling, NorthSec Training, Montreal

10-11 May 2026

3-Day Training: AI Whiteboard Hacking aka Hands-on Threat Modeling, in-person, OWASP Global AppSec EU, Vienna, Austria

22-24 June 2026

Threat Modeling Practitioner training, hybrid online, hosted by DPI, Europe Cohort

June 2026
