Threat Modeling Insider – December 2025

Threat Modeling Insider Newsletter

49th Edition – December 2025

Welcome!

Welcome to this month’s edition of Threat Modeling Insider! In this edition, we look back at the talk Sebastien Deleersnyder & Georges Bolssens gave at German OWASP Day in November.

Meanwhile, on the Toreon blog, Robert Hurlbut explores threat modeling for embedded systems, showing why these ubiquitous devices make threat modeling critical in modern society.

There’s plenty of other actionable insight ahead, so settle in and let’s get started!

In this edition

Tips & tricks
Threat Modeling Tool Directory

Training update
An update on our training sessions.

Presentation recap

The Automation Illusion? What Machines Can’t Do in Threat Modeling.

Written by Sebastien Deleersnyder

This blog post is based on research we conducted for ThreatModCon Washington; the talk was recently recorded at German OWASP Day. The link to the recording is available below, and you can download the slides from LinkedIn.

As part of our research for the presentation on threat modeling automation and tooling, we examined available threat modeling tools and compiled a list, which is available here on GitHub. This directory focuses exclusively on threat modeling tools—software, code, libraries, or services that automate, guide, or support the design-time threat modeling process.

Introduction

For decades, we’ve hoped the next great technology would solve security, allowing us to kick back and work less. Now, Artificial General Intelligence (AGI) promises to perform “any intellectual task a human can”, leading many to assume it will solve the challenge of scaling a threat modeling program. The truth? Threat modeling (TM) is a “human-centric messy process” that desperately needs speed and consistency. While tools are rapidly improving, automation presents a major illusion: it supports the job, but it certainly doesn’t do the entire job for you. The genuine problem is not a lack of tools; it’s the human element. Read on to understand where the current technology stands and how your security team can master this crucial “skill shift.”

The Bottleneck: Why Traditional Threat Modeling is Broken

The sheer velocity of modern software development is running circles around traditional TM practices. The most significant pain points stem from unpredictable results and scarce expertise.

  • It’s Too Slow: TM’s highly interactive, human-centric nature is simply “too slow for modern development”.
  • Expertise is a Bottleneck: Finding and retaining people with the deep security expertise necessary to run practical TM workshops is notoriously tricky.
  • Inconsistent Results: If you give the same system to two different teams, they are likely to produce two different threat models. This unpredictability makes it nearly impossible for leadership to trust the process.

This is the context that gives rise to the idea of a “Threat Modeling General Intelligence (TMGI)”—a system to replicate the most expert human threat modelers across all domains.

Rapid Advancements in AI-Augmented Tooling

The integration of Generative AI into TM tooling is not science fiction; it’s happening now. The number of dedicated TM tools has surged (see our online Threat Modeling Tool Directory), and many are actively incorporating genAI.

  • Diagramming the System: Emerging tools are now tackling the initial “Diagram” (or “Describe”) step of the D-I-C-E framework. For example, systems like Eraser.io’s AI System Architecture Diagram Generator can rapidly generate detailed architecture diagrams from a simple text description, making it significantly easier to answer the fundamental question: “What are we working on?”
  • Generating Threat Models: LLM-powered tools like STRIDE-GPT can take a system description and even ingest a GitHub repository to create a threat model complete with attack trees and suggested mitigations. This helps empower non-security experts in threat modeling (a minimal sketch of this pattern follows this list).
  • Industry Validation: Major firms like JP Morgan Chase (JPMC) are deploying their own internal AI Threat Modeling Copilot (AITMC), reporting a 20% efficiency gain in their process. Furthermore, research shows that when human experts review threat models generated by automated tools, they often agree with the results, supporting the tools’ technical validity and completeness.
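
To make the “Generating Threat Models” pattern above concrete, here is a minimal sketch, assuming the OpenAI Python client and a generic model name. The prompt wording is our own illustration, not the actual implementation of STRIDE-GPT or any tool in the directory:

# Minimal sketch: drafting a first-pass STRIDE threat list from a plain-text
# system description with a general-purpose LLM. Model name and prompt are
# illustrative assumptions, not any specific tool's implementation.
from openai import OpenAI

STRIDE_PROMPT = """You are a threat modeling assistant.
For the system described below, list plausible threats grouped by the STRIDE
categories (Spoofing, Tampering, Repudiation, Information disclosure,
Denial of service, Elevation of privilege). Suggest one mitigation per threat
and flag every assumption you make about the architecture.

System description:
{description}"""

def draft_threat_model(description: str) -> str:
    """Return a draft STRIDE threat list. The output is a starting point
    that a human expert must review, never a finished threat model."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice; any capable LLM works here
        messages=[{"role": "user",
                   "content": STRIDE_PROMPT.format(description=description)}],
    )
    return response.choices[0].message.content

print(draft_threat_model(
    "A mobile app calls a REST API behind an API gateway; "
    "the API reads and writes a PostgreSQL database."
))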

These tools are rapidly moving us from simple checklists to a state of TM-Automation.

The Automation Illusion: What Machines Still Can’t Do

Despite these advances, the core challenge remains: automation doesn’t equal completion. The idea that a machine can take over is an illusion of automation.

Here’s where human expertise remains critical:

  • Handling Complexity and Nuance: Current AI models still have a long way to go to achieve AGI; they still struggle with long-term memory, visual understanding, and multi-step reasoning. Humans create with originality and vision, drawing on subjective feelings and cultural nuances. AI, lacking consciousness and lived experiences, cannot replicate this depth.
  • The “H” Word – Hallucinations: LLMs occasionally “make stuff up”. While prompting and fine-tuning models on high-quality, specific data can help minimize this risk, the human element is necessary to validate and verify output. We must be the final approver (a minimal review-gate sketch follows this list).
  • Securing Buy-In is a Soft Skill: The hardest part of scaling a threat modeling program isn’t the technical analysis—it’s securing buy-in. Threat modeling is fundamentally a soft skill rooted in human interaction and shared vision. Tools can’t teach a developer why a control is important, but a security expert can.
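
Here is what “being the final approver” can look like in code: a minimal review-gate sketch, with all class and field names as our own illustrative assumptions, where no AI-drafted threat is accepted into the model without a named human decision:

# Minimal sketch of a human-in-the-loop gate: every AI-generated threat must
# be explicitly accepted or rejected by a named reviewer before it enters
# the threat model. All names and fields are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Threat:
    title: str
    category: str               # e.g. a STRIDE category
    source: str = "ai"          # who proposed it: "ai" or "human"
    status: str = "draft"       # "draft" -> "accepted" or "rejected"
    reviewer: Optional[str] = None

def review(threat: Threat, reviewer: str, accept: bool) -> Threat:
    """Record the human decision. AI output never reaches accepted status
    without a named reviewer: the human stays the final approver."""
    threat.status = "accepted" if accept else "rejected"
    threat.reviewer = reviewer
    return threat

# Usage: triage an AI-drafted threat before it lands in the model.
draft = Threat(title="JWT replay against the payments API", category="Spoofing")
reviewed = review(draft, reviewer="alice", accept=True)
assert reviewed.status == "accepted" and reviewed.reviewer == "alice"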

The Path Forward: Threat Model Like a Surgeon

The reality is that AI is driving a shift in skills, not their elimination. You need to transform your team from security police to high-impact strategic experts.

We are moving through the stages of AI adoption: from being Skeptics to Explorers (using AI for basic TM steps) to Collaborators (co-creating multi-step processes). The ultimate goal is to become an AI Strategist.

  • Focus Your Expertise: Like a surgeon who trusts their team to handle preparation but steps in for the most critical incision, your expertise must be applied where it has the most value. Let the AI handle the initial analysis and triage; you focus on making the final decisions and handling the complexity.
  • Adopt a Collaborative Workflow: Instead of fighting the tools, learn to leverage them. Your team’s job is to verify & own decisions and increase your impact.

 

The future of threat modeling is a highly effective collaboration between human intellect and machine efficiency.


Conclusion

The automation illusion is believing that simply buying a tool will solve the problem of scaling your threat modeling program. It won’t. The reality is more nuanced: AI is a powerful copilot that eliminates tedious work and lets your team focus on strategic risk management.

To truly scale, your security team must evolve. They need to understand how to prompt effectively, integrate LLMs into dev toolchains, and, most importantly, provide the human judgment, verification, and critical soft skills necessary to secure buy-in across the organization.
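
As one hedged sketch of what “integrate LLMs into dev toolchains” can mean in practice, here is a small script, with file paths and the draft generator as our own assumptions, that regenerates a draft threat model whenever a commit touches the architecture docs:

# Illustrative sketch of toolchain integration: when the last commit touches
# the architecture docs, regenerate a draft threat list for human sign-off.
# Paths are assumptions; draft_fn stands in for any generator, such as the
# draft_threat_model() sketch shown earlier.
import subprocess
from pathlib import Path
from typing import Callable

ARCH_DOCS = {"docs/architecture.md"}  # hypothetical location

def changed_files() -> list:
    """Files touched by the last commit, according to git."""
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def refresh_draft(draft_fn: Callable[[str], str]) -> None:
    touched = [f for f in changed_files() if f in ARCH_DOCS]
    if not touched:
        return  # nothing architectural changed; skip the LLM call
    description = Path(touched[0]).read_text()
    Path("threat-model-draft.md").write_text(draft_fn(description))
    print("threat-model-draft.md updated; it now needs human review.")

Run as a CI step or git hook, the expensive LLM call only fires when the architecture actually changes, and the human review gate stays in the loop.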

Ready to empower your security team to master the new era of threat modeling? Our specialized threat modeling training is designed for this exact skill shift. We train your team to effectively use these new tools and lead the security conversation in your company.

Transform your team into AI Strategists today.

Handpicked for you

Toreon Blog: Threat modeling & Embedded Systems

In modern society, everywhere we look, we see examples of embedded systems at work: IoT devices in our homes, critical controllers in industrial facilities, medical devices, and all kinds of vehicles. Despite their ubiquity and importance, these systems often lack robust security frameworks comparable to those in traditional IT systems. This is where threat modeling becomes critical, providing a way to think about secure design and focusing on understanding and mitigating embedded system threats.



Curated Content

AI-Powered Cyber Espionage: A Groundbreaking Threat Landscape

Anthropic disclosed a sophisticated AI-orchestrated cyber espionage campaign allegedly conducted by a Chinese state-sponsored group, demonstrating how AI can autonomously execute complex cyber attacks with minimal human intervention.

Key takeaways:

  • AI systems can now autonomously perform 80-90% of cyber operations, from reconnaissance to data exfiltration
  • The attack used role-play social engineering to manipulate Claude AI into executing malicious tasks
  • Only a small percentage (10-17%) of targeted organizations were successfully compromised, indicating potential limitations of AI-driven attacks

Threat Modelling Isn’t a Security Exercise — It’s a Design Discipline

When treated as a continuous, collaborative practice rather than a security checkbox, threat modelling improves architectural clarity and encourages engineers to think about risk before code is written. Ultimately, embedding threat modelling early leads to cleaner designs, better documentation, and a stronger security culture across the entire team.

Key takeaways:

  • Threat modelling is a mental model and design practice, not a vulnerability-hunting exercise owned solely by security teams
  • It should be a continuous, evolving process that adapts alongside architectural and system changes
  • Practising threat modelling early results in clearer architectures, more focused testing, and a more security-aware engineering culture

Tips & Tricks

Threat Modeling Tool Directory

As part of our research for the presentation on threat modeling automation and tooling, we examined available threat modeling tools and compiled a list, which is available below on GitHub. This directory focuses exclusively on threat modeling tools—software, code, libraries, or services that automate, guide, or support the design-time threat modeling process.

Our trainings & events for 2026

Book a seat in our upcoming trainings & events

Half-day Workshop: Threat Modeling with AI, in-person, CodeMash Conference, Ohio

12-16 January 2026

2-Day Training: AI Threat Modeling Next Generation: From Whiteboard Hacking to Hands-on Prompting, in-person, London OWASP Training Days, UK

25-26 Feb 2026

Threat Modeling Practitioner training, hybrid online, hosted by DPI, US Cohort

March 2026

Advanced Whiteboard Hacking – aka Hands-on Threat Modeling, NorthSec Training, Montreal

10-11 May 2026

Threat Modeling Practitioner training, hybrid online, hosted by DPI, Europe Cohort

June 2026

TBD