The Hidden Risk in Your Security Design: Managing Unknowns and Assumptions in Threat Modeling

In the world of cybersecurity, what you know can protect you, but what you assume can destroy you. Threat modeling is a foundational pillar of secure design, yet even the most rigorous models are often built on a shaky foundation of “unknowns” and “unspoken assumptions”. 

Whether you are “shifting left” leveraging modern DevOps processes or managing a legacy monolith, understanding how to document and validate these gaps is the difference between a resilient system and one waiting for a breach.

Exploring unknowns via the DICE framework

Most security teams follow the industry-standard “DICE” framework for threat modeling. However, this approach by no means prevents assumptions from being made.  

To refresh everyone’s mind, here are the four phases of this framework: 

  1. D is for Description of the context (What are we building?) 
  2. I is for Identification of threats (What can go wrong?) 
  3. C is for Countermeasure definition (What are we going to do about it?) 
  4. E is for Evaluation (Did we do a good enough job?) 

While most mature organizations have standardized the first three phases quite well, validation of assumptions in the last phase is often overlooked. That is no surprise: validating unknowns and assumptions in the “Evaluation” phase requires keeping track of them throughout the three phases that precede it. 

This article introduces the concept of an “assumption register” and explores how to piggyback its creation off of the four phases of the DICE framework. 


What are we working on?

This seems like a straightforward question, yet in the threat models we create with our clients there is rarely an all-encompassing answer.  

When threat modeling a system that doesn’t exist yet, technical details have often not crystallized yet. While this is mostly a by-product of threat modeling “too soon” in the design phase, it does present us with an opportunity to start documenting unknowns and assumptions. 

In existing / legacy systems the story is very different. Knowledge drain from staff churn and incomplete documentation often leaves significant gaps in knowing how the system actually functions. 

What can go wrong?

This source of unknowns is different from the other ones, mainly because it doesn’t have the word “we” in it. The unknowns here originate from others’ actions and are an artefact of “you don’t know what you don’t know”. Specialized security expertise helps, but it’s a luxury many teams simply don’t have. This is where assumptions must be made, but also where some important misassumptions are born.  

A few examples of these that we have come across in the past: 

  • The “Good employee” fallacy: Assuming attackers only come from the outside. In reality, employees can be bribed, threatened, vengeful or in need of cash. 
  • The “Security through obscurity” fallacy: Thinking “only we understand how this works” and/or “nobody would bother to figure this out”. Advanced adversaries use automated tools to uncover precisely what you think is hidden. 
  • The “Nobody would actually do this” fallacy: Underestimating the technical sophistication or motivation of potential threat actors. People who have relatively little to lose can act in extremely unpredictable ways. 
  • The “Hypothetical attack, zero likelihood” fallacy: A great example of “you don’t know what you don’t know”; we have literally heard this from multiple system engineers when discussing VM-to-host escapes and VLAN hopping during threat elicitation. This is the easiest fallacy to debunk if documented cases are available. Sometimes, indeed, there just aren’t any documented cases, and the threat is effectively hypothetical. In that case, document a predictive assumption statement along the lines of: “We assume that an exploit for breaking TLS 1.2 will not be developed during the lifetime of this IoT device”. 


What are we going to do about it?

Misassumptions also happen here, but for different reasons.  

  • The “Our cloud vendor surely takes care of this” fallacy: Teams often assume that using a major cloud provider solves all their security problems. This ignores the Shared Responsibility Model: the vendor secures the “fabric” of the cloud, but you remain responsible for securing your data in the cloud. 


Together with the predictive claim about the perceived lack of an exploit, the real assumptions that support the “unhackability” of the TLS 1.2 connection belong in this “What are we going to do about it?” phase: 

  • “We assume that our infrastructure team implements TLS1.2 according to the current industry standards”. 
  • “We assume that our infrastructure team DOES NOT implement TLS/SSL versions that precede TLS1.2” 
  • “We assume that the public-key-pinning mechanism prevents the IoT device from accepting any certificate other than the one we envisioned” 
  • “We assume that the HTTP library in the IoT device doesn’t accidentally send its authentication token to the plain-HTTP endpoint before being redirected” 
  • “We assume that HTTPS implementations of our infrastructure are being validated in our quarterly penetration tests”. 
  • “We assume that Certificate Authority X will refuse CSRs for our domain” 
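Some of these assumptions can be turned into enforced properties rather than hopes. As a minimal sketch in Python using the standard `ssl` module (the function name is our own, not part of any prescribed implementation), this is how a client could refuse any protocol version below TLS 1.2, directly enforcing the second assumption above:

```python
import ssl

def make_strict_client_context() -> ssl.SSLContext:
    """Client-side TLS context that refuses SSL/TLS versions preceding TLS 1.2."""
    ctx = ssl.create_default_context()            # sane defaults: certificate + hostname checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3, TLS 1.0 and TLS 1.1
    return ctx

ctx = make_strict_client_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # → True
```

A check like this can also serve as the “Validation” entry for the assumption: run it in CI against the deployed configuration instead of trusting that the infrastructure team got it right.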

Getting these assumptions down is important, but the work isn’t done until they end up in an assumption register; more on this in the next section. 

The Rule of Thumb: If you don’t have a definitive answer, you have two choices: 

  • If acquiring the answer is feasible in the short term, document it as an open question 
  • If acquiring the answer is not (or no longer) feasible, document it as an assumption in (drum roll…) 


The Assumption Register

A high-quality register could, for example, track the following properties for every assumption: 

  1. The Assumption: A clear statement (e.g., “We assume the end user has an antivirus installed”). If it is unclear from the statement itself, note whether the assumption is time-bound. 
  2. Validation: A test that can prove the assumption is correct. This is not always feasible, as in the case of a third-party computer having A/V installed. 
  3. Confidence (+ justification): How sure are you when stating this assumption? (e.g., “Almost guaranteed due to corporate environment policy”). 
  4. Consequence if it fails: What happens if the assumption is wrong? (e.g., “The user downloads a virus via our platform and we have no server-side scanning to stop it”). 
  5. Owner: The person, department or role that owns the risk of the assumption being wrong 
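To make this concrete, here is a minimal sketch of such a register in Python. The field names mirror the five properties above; the class, the example entry and the owner name are our own illustration, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assumption:
    statement: str             # 1. a clear, ideally time-bound statement
    validation: Optional[str]  # 2. a test that can prove it (None if infeasible)
    confidence: str            # 3. how sure are we, and why?
    consequence: str           # 4. what happens if the assumption is wrong?
    owner: str                 # 5. person, department or role owning the risk

register = [
    Assumption(
        statement="We assume the end user has an antivirus installed",
        validation=None,  # third-party machine: we cannot test this ourselves
        confidence="Almost guaranteed due to corporate environment policy",
        consequence="The user downloads a virus via our platform and "
                    "we have no server-side scanning to stop it",
        owner="IT Operations",  # hypothetical owner, for illustration
    ),
]

# During the Evaluation phase, surface every assumption that still lacks a test:
unvalidated = [a.statement for a in register if a.validation is None]
print(unvalidated)  # → ['We assume the end user has an antivirus installed']
```

Even a flat list like this is enough to drive the “Evaluation” phase: filter on missing validations, low confidence or unowned risks and you have a prioritized work list.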


Conclusion: Don’t Let Unknowns Be Your Downfall

Threat modeling is a continuous process of discovery. By identifying your unknowns and tracking your assumptions, you turn “blind spots” into a prioritized list of things to validate. 

Next Step for Your Team: Review your current project’s architecture diagram. Pick one interaction across a trust boundary and ask yourself: “What are we assuming that, if true, makes this secure?” Then ask the inverse: “What are we assuming that, if false, makes this insecure?”  

Add those items to your Assumption Register. 

About the Author:

Georges’ lifelong curiosity about ‘how stuff works’ culminated in a Master’s degree in Electro-Mechanical Engineering. With over 15 years of experience in technical and managerial roles within the biotech industry, he developed a deep proficiency in programming and a passion for cybersecurity.

This unique combination of engineering logic and coding expertise makes Georges an ideal Application Security expert; he relates to the daily challenges of software developers while fully understanding the adversarial mindset of hackers. Since transitioning to AppSec in 2017, he has consulted for a wide variety of business contexts.

Georges joined Toreon in 2021, where he currently serves as the Product Owner for Threat Modeling Consulting. He is also the Lead Trainer for Toreon’s globally recognized ‘Whiteboard Hacking’ training. Leveraging his background in electronics, Georges is a key member of the hardware penetration testing team, with specific expertise in threat modeling for embedded medical and non-medical devices.

Schedule a call

Get in touch with our experts for a no-obligation advisory conversation.


Upcoming Events/Webinars

Supernova/Cybernova

You’ll be able to find us at both SuperNova on March 25th and 26th, and CyberNova on March 24th. SuperNova is an internationally renowned event for tech and innovation, while this year marks the first edition of CyberNova, focusing on all things cybersecurity. We will also be present at the Agoria booth space.

Webinar Toreon x IriusRisk

This session is the first in our two-part series with IriusRisk and takes a maturity-based look at how threat modeling evolves as organizations scale from startup to enterprise.

You’ll learn how high-performing teams maintain consistent security standards while avoiding common pitfalls such as fragmented processes, inconsistent risk coverage, and duplicated or missed controls.

We’ll explore how the right mix of services, tooling, and best practices helps organizations build scalable, sustainable threat modeling programs, ensuring security keeps pace with business growth.

