Threat modeling (a structured, proactive process for identifying, analyzing, and mitigating potential security threats to systems, software, or networks by simulating attacker perspectives) remains essential in this context. It provides a systematic way to analyze systems, understand attack surfaces, and identify vulnerabilities, including AI-specific risks such as data exposure, model manipulation, or misuse through adversarial prompting. The limitation is not the method itself: threat modeling is fully capable of accommodating AI-specific risks. The limitation is structural. In many organizations, threat modeling still functions as a one-time checkpoint rather than a continuous practice: findings are documented, recommendations are issued, and responsibility is handed over. This was already a problem for traditional systems; for systems with AI components, it becomes acute.
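To make that claim concrete, here is a minimal sketch of how an AI-specific threat can be recorded in the same shape as a traditional threat-model entry. The field names and category labels are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    # AI-specific categories named in the text, plus a traditional
    # one for comparison; the exact taxonomy is an assumption.
    DATA_EXPOSURE = "data exposure"
    MODEL_MANIPULATION = "model manipulation"
    ADVERSARIAL_PROMPTING = "misuse through adversarial prompting"
    TAMPERING = "tampering"

@dataclass
class Threat:
    """One threat-model entry; AI-specific risks need no special shape."""
    identifier: str
    category: Category
    attack_surface: str   # e.g. "inference API", "training pipeline"
    description: str
    mitigation: str
    owner: str            # team accountable for the mitigation

# An AI-specific threat recorded exactly like a traditional one.
prompt_abuse = Threat(
    identifier="T-017",
    category=Category.ADVERSARIAL_PROMPTING,
    attack_surface="customer-facing chat endpoint",
    description="Crafted prompts exfiltrate system instructions or private context.",
    mitigation="Output filtering and least-privilege retrieval scope.",
    owner="product + MLOps",
)
```

Nothing about the entry is AI-specific except its category and attack surface; the method absorbs these risks without any structural change.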
AI-related mitigations tend to span multiple teams: data engineering, MLOps, product, legal, and compliance. Timelines stretch and ownership blurs. Without a mechanism to track and reassess risks continuously, even well-executed threat models lose their value as the system evolves.
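One way to supply that mechanism is to treat each finding as a record with a named owner, a status, and a reassessment date, and to continuously surface entries that have gone stale. A minimal sketch under those assumptions (the 90-day review interval and field names are hypothetical, not from the text):

```python
from dataclasses import dataclass
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # assumed cadence, not prescriptive

@dataclass
class TrackedRisk:
    threat_id: str
    owner: str          # single accountable team, even if work spans several
    status: str         # e.g. "open", "mitigating", "accepted", "closed"
    last_reviewed: date

def stale_risks(register: list[TrackedRisk], today: date) -> list[TrackedRisk]:
    """Return open risks whose last reassessment predates the review interval.

    Surfacing these continuously is what keeps the threat model from
    decaying into a one-time checkpoint as the system evolves.
    """
    return [
        r for r in register
        if r.status != "closed" and today - r.last_reviewed > REVIEW_INTERVAL
    ]

register = [
    TrackedRisk("T-017", owner="MLOps", status="mitigating",
                last_reviewed=date(2024, 1, 10)),
    TrackedRisk("T-021", owner="data engineering", status="open",
                last_reviewed=date(2024, 5, 2)),
]
for risk in stale_risks(register, today=date(2024, 6, 1)):
    print(f"{risk.threat_id} owned by {risk.owner} needs reassessment")
```

Running a query like this on a schedule or on every deploy turns the threat model from a handover document into a living register, where entries visibly decay when no one reassesses them.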