Secure-by-Design in the AI Age: From Risk Identification to Ongoing Control

  • AI breaks “classic” secure-by-design: risks keep shifting after deployment due to updates, reuse, and unpredictable behavior.
  • The problem is control, not detection: a threat modeling report often leads to fragmented follow-up and unclear ownership.
  • Secure-by-design for AI must be continuous: sustained accountability, tracking, and visibility into residual risk across the full lifecycle.
  • AI-rich companies hit the wall first: scale, key-person risk, and audit pressure make one-off reviews unsustainable.
  • Toreon makes it actionable with AI threat modeling: we pinpoint AI-specific vulnerabilities as the starting point for focused mitigation and provable compliance.

Secure-by-design was built for a different pace of change

Secure-by-design has long been a cornerstone of cybersecurity and risk management. The principle consists of three practices: identify risks early, mitigate them before deployment, and maintain control throughout the system’s lifecycle. For traditional IT systems, this approach has been effective, not because those systems are risk-free after launch, but because their rate of change is generally slower and more manageable. Patches are issued, configurations are monitored, and periodic reassessments catch what shifts over time.

In practice, however, many organizations have reduced secure-by-design to a front-loaded exercise: identify, mitigate, document, hand over. Continuous monitoring of the system is left out. That gap between the principle and how it is practiced matters, because AI exposes it ruthlessly.

How AI accelerates existing challenges

AI systems do not introduce an entirely new category of problems. Traditional systems also produce unintended behavior (bugs), suffer from supply chain risks, and develop vulnerabilities in production.

What AI does is accelerate and intensify these dynamics. The reasons are specific and worth distinguishing:

  • Data drift and retraining can change model behavior over time, sometimes deliberately, sometimes silently. A model that performed safely at launch or in a lab environment may not behave the same way a few months later.
  • Non-deterministic outputs mean that even without any change to the model itself, responses can vary across identical inputs, making exhaustive pre-deployment testing impossible.
  • Context-dependent risk profiles occur when the same model is reused across different workflows. A large language model powering an internal knowledge base carries different risks than the same model handling customer-facing interactions.
  • Misuse and impact risks that do not originate from technical flaws, but rather from how AI decisions affect people, processes, or business outcomes. These impacts are difficult to anticipate during the design and testing phases.
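The drift concern in the first bullet can be made concrete with a small monitoring sketch. This is illustrative only: the mean-shift statistic and the threshold are assumptions, and production monitoring typically relies on proper tests such as Kolmogorov-Smirnov or a population stability index.

```python
import statistics

def drift_score(baseline, current):
    """Shift of the current mean, measured in baseline standard deviations.
    A deliberately simple drift signal for illustration."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(current) - mu) / sigma

def needs_reassessment(baseline, current, threshold=2.0):
    """Trigger a risk reassessment when an input feature drifts too far.
    The 2-sigma threshold is an arbitrary example value."""
    return drift_score(baseline, current) >= threshold

# A feature that looked stable at launch...
baseline = [9.0, 10.0, 11.0, 10.0, 9.0, 11.0, 10.0]
# ...may look very different a few months later.
drifted = [14.0, 15.0, 13.0, 14.0]
```

The point is not the statistic itself but the wiring: a drift signal that automatically reopens the risk assessment, rather than a human remembering to look.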

These are not just abstract concerns. They are practical realities that demand  a more continuous and cross-functional approach to risk.

The role and limits of threat modeling for AI

Threat modeling (a structured, proactive process for identifying, analyzing, and mitigating potential security threats to systems, software, or networks by simulating attacker perspectives) remains essential in this context. It provides a structured way to analyze systems, understand attack surfaces, and identify vulnerabilities, including AI-specific risks such as data exposure, model manipulation, or misuse through adversarial prompting. The limitation is not the threat modeling method itself; threat modeling is fully capable of accommodating AI-specific risks. The limitation is structural: in many organizations, threat modeling still functions as a checkpoint rather than a continuous practice. Findings are documented, recommendations are issued, and responsibility is handed over. This was already a problem for traditional systems. For systems with AI components, it becomes critical.

AI-related mitigations tend to span multiple teams, e.g. data engineering, MLOps, product, legal, and compliance. Timelines extend and ownership becomes unclear. Without a mechanism to track and reassess risks continuously, even well-executed threat models lose their value as the system evolves.

The real gap: from identification to sustained control

The core challenge AI introduces is not that risks are harder to identify (although some are). It is that the window between identification and obsolescence of that assessment is dramatically shorter. Risks shift as models drift, as usage patterns change, and as new integrations are added. Controls that were adequate at deployment may not hold a few months later.

This means secure-by-design, as a principle, does not need to be replaced. It needs to be practiced the way it was always intended: as an ongoing, continuous process, not a phase. For AI systems, as well as for classical IT systems, this means embedding continuous monitoring, periodic reassessment, and clear cross-functional ownership into the standard operating model, not as an afterthought, but as a non-functional requirement.

The organizations that manage AI risk effectively are not the ones that identify the most risks upfront. They are the ones that build the operational capacity to maintain control as risks evolve.

From knowing risks to controlling them

What AI exposes is a fundamental gap between risk awareness and risk control. In traditional software security, a vulnerability is often binary: it is either patched or it isn’t. AI systems, however, introduce “liquid” risks. A model that is secure today can become a liability tomorrow due to data drift, model decay, or shifts in the operational context.

For AI, Secure-by-Design requires a transition from static documentation to operational discipline. It is not enough to identify a threat; organizations must ensure:

Continuous Ownership: Clearly defining who is responsible when model performance degrades or security boundaries shift six months after deployment.

Traceability of Mitigation: Moving beyond recommendations to verifiable proof that a threat modeling finding has been addressed in the code, the data pipeline, or the guardrail configurations.

Residual Risk Visibility: Maintaining a real-time ledger of which risks were accepted to meet time-to-market demands and setting triggers for when those risks must be reassessed.
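The residual-risk ledger described above can be sketched as a small data structure. All names and the 90-day review window below are illustrative assumptions, not a real Toreon or Yields schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AcceptedRisk:
    """One entry in a residual-risk ledger (hypothetical structure)."""
    finding: str
    owner: str                    # continuous ownership: who answers for this risk
    accepted_on: date
    review_after_days: int = 90   # trigger: reassess once this window elapses

    def due_for_reassessment(self, today: date) -> bool:
        return today >= self.accepted_on + timedelta(days=self.review_after_days)

def overdue(ledger, today):
    """Entries whose reassessment trigger has fired."""
    return [r for r in ledger if r.due_for_reassessment(today)]

ledger = [
    AcceptedRisk("Prompt injection on chat endpoint", "ml-platform", date(2025, 1, 10)),
    AcceptedRisk("Training data provenance gap", "data-eng", date(2025, 5, 1)),
]
```

Even this toy version captures the three requirements at once: every accepted risk has a named owner, a record of when it was accepted, and an explicit trigger forcing it back onto the agenda.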

Why AI-rich organizations feel this most

Organizations managing a handful of AI experiments can often rely on manual oversight. However, as companies scale to dozens or hundreds of use cases (the AI-rich organizations), this manual approach becomes a critical bottleneck.

These organizations hit three specific walls:

Fragmentation: Different data science teams use varying tools and maturity levels. Without a centralized platform, a unified view of the organization’s AI risk posture is impossible to maintain.

Key-person Risk: Deep knowledge about specific model vulnerabilities often resides only in the minds of a few specialized engineers. If they leave, the “secure” understanding of the system leaves with them.

Regulatory Pressure: Frameworks like the EU AI Act do not just require you to know your risks; they demand proof of how those risks are managed across the entire lifecycle.

Without an operational governance layer, “Secure-by-Design” remains a theoretical exercise that fails under the pressure of scale and audit.

The "Secure-by-Design" Joint Service: Toreon & Yields

The Secure-by-Design Joint Service is specifically designed to close this gap by transforming static risk identification into sustained, accountable control.

This model focuses on collaborative service delivery: Toreon identifies the technical cyber vulnerabilities of an AI system, and Yields provides the operational framework and tooling to manage those risks until resolution.

Phase 1: Threat Identification (Toreon)

Toreon specialists conduct their standard cyber threat modeling, looking specifically for AI-centric vulnerabilities, whether they are security risks or people and process risks.

  • Gap Identification: Toreon identifies specific threats to the AI lifecycle, such as unauthorized data access, prompt injection, or model tampering.
  • Compliance Mapping: Each threat is mapped against global AI standards and internal policies to determine if it constitutes a breach of the EU AI Act or other frameworks.

Phase 2: Operationalization (Yields Integration)

Instead of delivering a static report that risks becoming “shelfware,” Toreon uses the Yields External API to turn findings into a live governance workflow:

  • Finding Creation: Every vulnerability identified by Toreon is automatically listed as a “live finding” in Yields.
  • Stakeholder Assignment: To remove key-person risk, the system automatically notifies the relevant Risk Manager or ML engineer of their specific responsibilities.
  • Task Orchestration: Mitigation actions are managed via the Yields Task Manager, ensuring that the “Secure-by-Design” requirements are met as a “hard gate” before the model moves into production.
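The hand-off in this phase amounts to shaping a finding into a machine-readable record. The Yields External API’s real endpoints and field names are not documented in this article, so everything below is a hypothetical illustration of the kind of payload a “live finding” might carry:

```python
import json

def build_finding_payload(threat, severity, assignee):
    """Shape a threat-modeling finding as a 'live finding' hand-off record.
    Every field name here is a hypothetical stand-in, not the real Yields schema."""
    return {
        "type": "live_finding",
        "title": threat,
        "severity": severity,
        "assignee": assignee,       # stakeholder assignment counters key-person risk
        "gate": "pre_production",   # treated as a hard gate before release
        "status": "open",
    }

payload = build_finding_payload(
    "Prompt injection via customer-facing chat", "high", "ml-engineer@example.com"
)
body = json.dumps(payload)  # what would be sent to the governance platform
```

The structural point is that the finding leaves the consultant’s report as data with an owner, a severity, and a release gate attached, so it can be tracked to resolution rather than filed away.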

Phase 3: Continuous Monitoring & "Proof of Compliance"

Once the initial security threats are addressed, the platform maintains a “Proof of Compliance” that reflects the system’s current risk posture in real time.

  • Audit Readiness: The entire history of Toreon’s security findings and the subsequent fixes is logged, creating a transparent, immutable audit trail for regulators.
  • Efficiency Gains: For AI-light companies, this process generates evidence with minimal effort from engineers. For AI-rich companies, it provides the only way to maintain a bird’s-eye view of risk across hundreds of models.

Ready to see how your company can benefit?

Get in touch with our experts for a no-obligation advisory conversation.


Upcoming Events/Webinars

Connect-IT

You can find us at Connect-IT in May. Our HR team will help you explore new career opportunities and show you what working at Toreon is like.
