TMI newsletter 27 – Unleashing the Power of Threat Modeling

Welcome

In this Summer edition of our newsletter, Brook Schoenfield shares his insights on driving wider adoption of threat modeling practices: the technical challenges, the risk problem, leadership, and building a supportive culture that enables continuous improvement in threat modeling. We also highlight some useful tips on threat modeling with FAIR STRIDE and on AI-based attacks.

Enjoy the read… and the Summer! 

The TMI line-up for this month:

  • Unleashing the Power of Threat Modeling: Overcoming Challenges, Building Community, and Driving Adoption, written by Brook S.E. Schoenfield
  • Instant Threat Modeling – #10 Adversarial ML & AI, by SecuRingPL
  • Toreon presentation: OWASP SAMM Threat Modeling: From Good to Great, by Sebastien Deleersnyder
  • An update on our upcoming training sessions

Unleashing the Power of Threat Modeling: Overcoming Challenges, Building Community, and Driving Adoption

By Brook S.E. Schoenfield, author, CTO, and Chief Security Architect at Resilient Software Security

In the realm of software development, the adoption of threat modeling practices presents its own set of challenges. While discussions often revolve around developer resistance, it is crucial to recognize that this perception may not reflect the norm. In truth, developers face various obstacles, such as doubt, heavy workloads, and overwhelming demands. But outright battle? Not so much[1].

That is not to say that there aren’t any challenges, sometimes significant ones. Whenever a new element is introduced to an established process, hurdles naturally arise.

For those lucky enough to “begin at start” (to quote Sue Negrin), things will be simpler. We can set a pattern that we then fulfill, rather than needing to add or insert into an established process, which causes greater disruption.

There will still be challenges; there are always challenges! Isn’t that why we come to work? To meet the complex beauty of untying Gordian knots and then putting the threads back together again in more effective, more manageable, and sometimes just cleaner knots (i.e., processes, tools, algorithms, solutions)?

Here, I will focus on threat modeling when used as a critical input to secure design, implementation, and validation for building software that can meet its security requirements, and that has a real chance of surviving the attacks we expect to be exercised against it. That makes threat modeling a very important part of generating software[2].

Nearly 100% of the time, when I help threat modeling get adopted by a development organization, there is already a process and set of tools established into which threat modeling will need to fit. I believe therein lies “the rub” as Shakespeare so brilliantly put it.

So, what are the problems I encounter the most? It’s not outright developer hostility! Many, if not most, developers love learning something new, as long as it’s engaging. If that something is done as part of producing software, it absolutely must demonstrate value. “Value” in this case can mean any of the following:

  • Facilitation
  • Ease
  • Acceleration
  • Improvement
  • Completeness

At the same time, that new thing mustn’t add significantly to the burdens inherent in designing, writing, and testing software. Building great software isn’t usually trivial, which is why we love doing it, and why many of the tasks are so very compelling. There are boring parts; there are tasks that can be exhausting. Hence my comment above about being overworked and overwhelmed: “Not another thing I have to worry about and do!”

Still, if a task is engaging, when it holds significant conundrums, developers will line up to do it. What better mystery than figuring out the myriad ways others can misuse a system? Add in divining the best collection of workable defenses, and now there’s a set of problems worth cracking one’s brain on.

Let’s start with the problem we must solve: Which attacks and what defenses?

It’s true that identifying the comprehensive set of attacks that are relevant to a particular system turns out to be non-trivial. Few directly applicable resources exist, sadly.

There are a few supporting data sets, like MITRE’s ATT&CK/D3FEND, the Common Weakness Enumeration (CWE), or the Common Attack Pattern Enumeration and Classification (CAPEC), which my threat modeling students sometimes use. NIST’s Mobile Threat Catalogue is very good, but tightly constrained to a single use case. I’ve validated with several of my teaching peers that finding all the relevant attacks is one of the biggest stumbling blocks practitioners will have to surmount.
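As a minimal, hypothetical sketch of how a team might put those catalogs to work, the snippet below seeds a per-STRIDE attack checklist with CWE entries. The CWE IDs are real entries, but the groupings and the checklist itself are illustrative assumptions, not an authoritative mapping.

```python
# Hypothetical seed checklist: CWE entries a team has judged relevant to its
# system, grouped by STRIDE category. The groupings are illustrative only.
STRIDE_SEED_CHECKLIST = {
    "Spoofing":               ["CWE-306: Missing Authentication for Critical Function"],
    "Tampering":              ["CWE-89: SQL Injection"],
    "Repudiation":            ["CWE-778: Insufficient Logging"],
    "Information disclosure": ["CWE-200: Exposure of Sensitive Information"],
    "Denial of service":      ["CWE-400: Uncontrolled Resource Consumption"],
    "Elevation of privilege": ["CWE-269: Improper Privilege Management"],
}

def candidate_weaknesses(category: str) -> list[str]:
    """Return the seeded CWE entries to review for one STRIDE category."""
    return STRIDE_SEED_CHECKLIST.get(category, [])

print(candidate_weaknesses("Tampering"))
```

A checklist like this is only a starting point; the hard part described above is deciding which catalog entries actually apply to your system.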

Then there’s the risk problem. Ugh[3]. There is so much poor risk rating; CVSS (the Common Vulnerability Scoring System), which is a potential severity rating and very much not risk, has become the “de facto” industry practice. The mistake of substituting CVSS for risk leads to all manner of untold suffering, i.e., preventable compromises. I’ve lectured on rating risk for years; there isn’t enough space here to address workable solutions. Whatever you do, make sure that your method rests on a solid risk calculation foundation like Factor Analysis of Information Risk (FAIR), for example.
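As a highly simplified sketch of what such a foundation looks like, the snippet below uses FAIR’s top-level factors as point estimates. Real FAIR analyses work with calibrated ranges and simulation rather than single numbers, and the scenario values here are purely hypothetical.

```python
def fair_point_estimate(
    threat_event_frequency: float,  # expected threat events per year
    vulnerability: float,           # probability a threat event becomes a loss event
    loss_magnitude: float,          # expected loss per loss event, in currency units
) -> float:
    """Annualized loss exposure = Loss Event Frequency x Loss Magnitude,
    where Loss Event Frequency = Threat Event Frequency x Vulnerability."""
    loss_event_frequency = threat_event_frequency * vulnerability
    return loss_event_frequency * loss_magnitude

# Hypothetical scenario: ~4 credible attack attempts per year, a 25% chance that
# an attempt defeats the current controls, ~$200,000 expected loss per event.
print(f"${fair_point_estimate(4, 0.25, 200_000):,.0f} expected annualized loss")
```

Even this crude version makes the contrast with CVSS clear: the output is an expected loss, not a severity score.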

In the technical sphere, identifying workable, buildable defenses is also non-trivial. In part, that is because the defenses and mitigations for any exploitation technique, coupled with its weakness(es), are M:N, many to many. Check the buffer size before the copy to prevent a stack buffer overflow; escape your web inputs and/or perform output encoding to prevent Cross-Site Scripting (XSS). Those pairings look 1:1.
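As a minimal sketch of those two canonical 1:1 defenses (illustrative only, and in Python rather than the C or templating contexts where these bugs usually live):

```python
import html

BUF_SIZE = 256  # hypothetical fixed-size destination buffer

def bounded_copy(dest: bytearray, src: bytes) -> None:
    """Check the destination size before copying: the classic overflow defense."""
    if len(src) > len(dest):
        raise ValueError("input larger than destination buffer")
    dest[: len(src)] = src

def render_comment(untrusted: str) -> str:
    """Output-encode untrusted input before it reaches an HTML page (XSS defense)."""
    return f"<p>{html.escape(untrusted)}</p>"

buffer = bytearray(BUF_SIZE)
bounded_copy(buffer, b"hello")
print(render_comment('<script>alert(1)</script>'))
```

The next paragraph is exactly about why neither check, on its own, is enough.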

But knowing that finding every overflow mistake is hard and that we sometimes forget to send the input to an escaping routine, what other defenses should we build? That’s not always easy. Plus, we’re constrained by the current available infrastructure and investment, and by operational costs. What we need may very well not exist or its implementation may be (highly) flawed such that again, we mustn’t count on any single mitigation as “one and done”.

Meeting the tech challenges is one part of the journey. In my experience, each of these will take time and concerted effort: lots of repeated training, effective modeling, and plenty of expert support.

But we’re not done yet, not by a long shot.

Noopur Davis calls it “culture hacking”, and there isn’t a better term out there. When I help to build and lead a threat modeling effort, I spend as much time on building a supportive culture as I do on the tech aspects I’ve outlined above. For me, that cultural work comes down to:

  • Leadership – in the best sense of the word
  • Community
  • Pro-social modeling
  • Obvious and measurable improvement

In my book, “Secrets of a Cyber Security Architect” (available at https://www.amazon.com/Insiders-Guide-Cyber-Security-Architecture/dp/1498741991), I dedicated significant space to each of these topics. These subjects are expansive and deserve more in-depth coverage than an article can provide. Some of them could even warrant an entire book on their own. However, let me touch upon them briefly here.

By leadership, I mean obviously and demonstrably caring about both outcomes and the process for achieving them. How we make change must be just as important as the objectives we are trying to meet. The means must exemplify the ends; the ends, no matter how laudable, can never justify awful, unjust, oppressive means, or smart, empowered developers will resist.

The words “responsible” and “accountable” come to mind when contemplating leadership. To me, leadership entails taking full responsibility for the outcomes. If something is not right or if what we are doing is dysfunctional, it is my responsibility to make it right. If things aren’t going well, it’s on me. This sense of accountability is crucial for developers to trust a threat modeling effort.

Though often overlooked in the literature, fostering a community of practice is something many expert leaders of threat modeling programs understand well. This entails supporting and empowering developers to build threat models as an integral part of software development. A strong sense of community provides several benefits, including:

  • Support and organic mentoring as the community matures.
  • A shared sense of purpose, values, methods, and techniques.
  • A natural feedback loop for continuous improvement.
  • A platform for addressing common problems.
  • A channel for ongoing education.

While there may be alternative approaches to achieving these goals, a vibrant community inherently encompasses all these aspects. Nurturing and valuing the community will naturally bring forth these benefits.

In my book, “Secrets of a Cyber Security Architect,” I delved into the extensive body of research supporting the concept of “pro-social modeling,” which involves embodying the values and behaviors we wish everyone to embrace. While it may seem magical, pro-social modeling is the closest thing we have when it comes to achieving effective developer-centric threat modeling. However, it’s important to note that there is no actual magic involved. If we desire others to prioritize the security of their designs, we must genuinely care and demonstrate that care through our actions and the application of necessary skills.

I refer to this as “demonstrating value.” Throughout my extensive experience, particularly in the early stages of establishing a practice, I have often encountered initial reluctance. However, by simply doing the work, identifying crucial issues, and explaining their importance in terms of risk, I always witness a shift in mindset. Many developers genuinely care about the quality of their work and want it to withstand exploitation attempts. Finding bugs is an integral part of a developer’s role, and threat modeling offers a unique approach to uncovering vulnerabilities that may go unnoticed by other techniques.

This leads me to my final point: “Obvious and measurable improvement.” When weaknesses are addressed, security gaps are filled, and software demonstrates a robust security posture, people, particularly developers, take notice. While not all developers may be inclined to take on threat modeling themselves, some will be motivated to acquire the necessary skills. The observable and tangible progress serves as a powerful catalyst for further engagement and growth.

Intriguingly, during my time at Cisco, I discovered an interesting tipping point for instilling focus on security within a development team. Surprisingly, it didn’t require a majority or even a large minority to drive this change. When just two individuals, such as the security person and one other team member, prioritize security considerations, there is a chance that others may listen. But when a third person joins in, a remarkable shift occurs, capturing the attention of the entire team. With just three people, the team “tips” towards caring, leading to collective action. Importantly, these three individuals don’t necessarily have to be leaders; it is their genuine concern that drives team behavior.

This is precisely why I never turn away assistance from anyone, regardless of their seniority or technical expertise. Whether they are junior, senior, apprentice, journey-level, expert, master, or possess different technical backgrounds, I value their contribution. Once the tipping point is reached, most team members will embrace the work, too.

People naturally have the desire to witness progress and observe even the slightest improvements. By generating excitement and attracting proactive individuals who carry influence, I can propel the team closer to the crucial tipping point of complete adoption, where “threat modeling is what we do.”

[1] I’m focusing on developers in this article. I most certainly have encountered resistance, defiance, outright battle, passive aggression, and even downright sabotage from managers, mid-management, and executives. Those are somewhat different problems.

[2] There are other uses for threat modeling not covered here.

[3] While I luckily was introduced to Factor Analysis of Information Risk (FAIR) during its development circa 2005-6, most practitioners are not so fortunate. FAIR is available as an Open Group standard.

Curated threat modeling content

Have you ever wondered what the ROI is on a security control? Or whether you should spend time fixing 2 highs or 47 mediums? FAIR STRIDE is a method for creating application threat models that can answer these questions to help define a roadmap toward scalable risk reduction for a product.
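As a toy illustration of the kind of comparison such a risk-based approach enables (every number below is hypothetical and is not taken from the FAIR STRIDE material itself), consider annualizing the expected loss of each group of findings:

```python
# Entirely hypothetical numbers, for illustration only.
highs   = {"count": 2,  "loss_event_frequency": 0.5, "loss_magnitude": 400_000}
mediums = {"count": 47, "loss_event_frequency": 0.1, "loss_magnitude": 20_000}

def annualized_exposure(findings: dict) -> float:
    """Expected annual loss across a group of similar findings."""
    return findings["count"] * findings["loss_event_frequency"] * findings["loss_magnitude"]

print(f"2 highs:    ${annualized_exposure(highs):>10,.0f} per year")
print(f"47 mediums: ${annualized_exposure(mediums):>10,.0f} per year")
# With these made-up inputs, fixing the two highs removes roughly four times
# as much expected loss as fixing all 47 mediums.
```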

Interested in a nice overview of attacks that can assist threat modelers working on systems based on, or using, AI? Check this out:

Toreon blog: OWASP SAMM Threat Modeling: From Good to Great

written by Toreonite Sebastien Deleersnyder

In last month’s blog post, we published an article from OWASP Germany Day, OWASP SAMM Threat Modeling: From Good to Great. We will publish the recording soon, but you can already download the presentation on that same page!

Read more

We aim to make this a community-driven newsletter and welcome your input or feedback. If you have content or pointers for the next edition, please share them with us.

Kind regards,
Sebastien Deleersnyder
CTO, Toreon

Book a seat in our upcoming trainings

  • Threat Modeling Practitioner training, hybrid online, hosted by DPI (cohort starting on 18 September 2023)
  • Threat Modeling Medical Devices training, hybrid online, hosted by DPI & Medcrypt (cohort starting on 11 September 2023)
  • CISO Training Module 5: Threat & Vulnerability Management, in-person, hosted by DPI (cohort starting on 24 October 2023)
  • Advanced Whiteboard Hacking a.k.a. Hands-on Threat Modeling, in-person, hosted by Black Hat USA, Las Vegas (5-8 August 2023)

We also organize in-company training for groups of 10 or more participants.
