Despite growing efforts to improve application security through a Shift Left strategy – including Threat Modelling, SAST (Static Application Security Testing), SCA (Software Composition Analysis), DAST (Dynamic Application Security Testing), and Red Teaming – the adoption of AI/ML technologies introduces new risks that traditional methods cannot fully address.

AI-specific risks such as bias, opacity, unpredictability, data misuse, and model drift require an additional cybersecurity layer: AI Risk Assessment (AIRA).

This article outlines how to integrate AIRA across the development phases – Plan/Design, Code, Build, and Test – in conjunction with your existing security practices.

Phase 1: Plan/Design – Risk by Design

Objectives

  • Identify AI-specific risks early
  • Align with legal, regulatory, and ethical frameworks (e.g., EU AI Act, EU CRA, ISO 42001, NIST AI RMF)
  • Establish risk thresholds, safety constraints, and model governance strategies

Activities

  • AI Use Case Risk Classification: Categorize use cases based on potential harm (e.g., minimal risk to prohibited under the EU AI Act)
  • Impact and Harm Assessment: Evaluate risks to individuals, groups, or society
  • Data Governance Planning: Ensure transparency in data source, quality, and bias mitigation
  • Accountability Mapping: Define roles for AI model ownership, explainability and escalation procedures

Outputs & Tools

  • AI Risk Register (a minimal entry sketch follows this list)
  • AI Threat Modelling (complementing traditional threat modelling)
  • Ethics Review Checklist
  • Early-stage Model Cards / Datasheets for Datasets
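
To make the register concrete from the start, each entry can be captured as structured data rather than a free-form document. The sketch below is a minimal illustration in Python; the field names, risk tiers, and example values are assumptions, not a prescribed schema.

```python
# Illustrative AI Risk Register entry; schema and values are assumptions.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    # Simplified tiers loosely aligned with the EU AI Act categories
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AIRiskEntry:
    use_case: str
    risk_tier: RiskTier
    affected_groups: list[str]      # impact and harm assessment
    data_sources: list[str]         # data governance planning
    owner: str                      # accountability mapping
    mitigations: list[str] = field(default_factory=list)


entry = AIRiskEntry(
    use_case="CV screening assistant",
    risk_tier=RiskTier.HIGH,
    affected_groups=["job applicants"],
    data_sources=["historical hiring data"],
    owner="ml-platform-team",
    mitigations=["bias audit before release", "human review of rejections"],
)
```

Keeping the register in a machine-readable form makes it easy to reference later, for example in CI/CD gates or as tags on model records.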

Phase 2: Code – Secure and Responsible Development

Objectives

  • Implement secure, explainable, and bias-aware model logic
  • Detect and prevent vulnerabilities not captured by traditional code scanning tools

Activities

  • Bias-Aware Coding Practices: Monitor for proxy variables or discriminatory logic
  • Explainability Hooks: Integrate tools like SHAP, LIME, or counterfactuals for model interpretability (a SHAP sketch follows this list)
  • Model Version Control & Traceability: use ML repositories (e.g., DVC) – repositories specifically designed to manage and version machine learning models, datasets, and experiments – together with MLflow records, metadata entries that track model parameters, metrics, versions, and lineage to ensure auditability and reproducibility
  • Input Validation for Models: Defend against input-based attacks (e.g., poisoning, prompt injection)
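
As one illustration of an explainability hook, the snippet below attaches a SHAP explainer to a stand-in scikit-learn model so that per-feature contributions can be logged next to each prediction. The model and dataset are placeholders; only the general usage pattern is being shown.

```python
# Minimal explainability hook using SHAP; model and data are stand-ins.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])   # per-feature contributions

# In a real pipeline, persist these contributions alongside each prediction
# so reviewers can later answer "why did the model decide this?".
```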

Outputs & Tools

  • AI-Specific Secure Coding Standards
  • Source Code Review with AI-aware SAST
  • Versioned ML Repos (e.g., DVC, Git-LFS)
  • Logged MLflow Records for audit trails and rollback (sketched below)
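
A sketch of what such an MLflow record might look like, assuming a scikit-learn model; the run name, tags, and DVC revision are illustrative values rather than a fixed convention:

```python
# Recording parameters, metrics, and lineage with MLflow for auditability.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=500).fit(X, y)

with mlflow.start_run(run_name="demo-classifier-v1"):      # illustrative name
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.set_tag("dataset_version", "dvc:rev-abc123")     # lineage back to DVC
    mlflow.set_tag("risk_tier", "high")                     # from the AI Risk Register
    mlflow.sklearn.log_model(model, "model")                # versioned artifact
```

Tagging each run with the dataset revision and the risk tier links the audit trail in this phase back to the decisions made during Plan/Design.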

Phase 3: Build – Secure Composition and Compliance Automation

Objectives

  • Ensure AI artifacts (models, datasets, pipelines) are secure, compliant, and traceable
  • Extend traditional SCA to include AI-specific dependencies

Activities

  • AI-SCA (AI Software Composition Analysis): Analyse AI/ML components for vulnerabilities, licensing, and provenance
  • Model Compression & Obfuscation Review: Validate integrity and reproducibility post-quantization or pruning
  • Compliance in CI/CD Pipelines: Integrate AIRA checks as automated gates, alongside the SAST/DAST stages, covering risk scoring, model drift detection, and metadata tagging (a minimal drift gate is sketched after this list)
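
A minimal sketch of such a gate, assuming the pipeline has access to a training-time feature baseline and a recent production sample stored as NumPy files; the file paths and the p-value threshold are assumptions for illustration:

```python
# Automated AIRA gate: fail the CI job if feature drift exceeds a threshold.
import sys
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01          # gate threshold; tune per risk tier

baseline = np.load("baseline_features.npy")     # rows = samples, cols = features
current = np.load("current_features.npy")       # sampled from recent traffic

drifted = []
for i in range(baseline.shape[1]):
    stat, p_value = ks_2samp(baseline[:, i], current[:, i])
    if p_value < DRIFT_P_VALUE:
        drifted.append(i)

if drifted:
    print(f"Drift detected in features {drifted}; blocking the build.")
    sys.exit(1)                                  # non-zero exit fails the CI stage
print("No significant drift detected; gate passed.")
```

Wired into GitHub Actions or GitLab CI, a non-zero exit code from a script like this blocks the build exactly like a failing SAST or DAST stage.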

Outputs & Tools

  • AI Software Bill of Materials (AI-SBOM)
  • Automated Testing Pipelines
  • AI Security Gates in CI/CD (e.g., via GitHub Actions, GitLab CI)

Phase 4: Test – Trust and Safety Validation

Objectives

  • Uncover hidden or emergent threats
  • Validate trustworthiness, fairness, and safety before deployment

Activities

  • AI Red Teaming / Adversarial Testing: Stress test models for robustness (e.g., adversarial examples, prompt injection)
  • Bias & Fairness Testing: Use tools like Aequitas, Fairlearn, or What-If Tool to assess discrimination across user groups (a Fairlearn sketch follows this list)
  • Explainability Testing: Ensure model decisions are interpretable and support risk acceptance decisions
  • Model-in-the-Loop Simulation: Simulate AI performance in real-world contexts (e.g., user flows, edge cases)
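
As a small illustration of bias and fairness testing with Fairlearn, the snippet below compares accuracy across groups and checks demographic parity; the labels, predictions, sensitive attribute, and 0.1 threshold are all stand-ins:

```python
# Fairness check sketch using Fairlearn; data and threshold are illustrative.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)            # stand-in labels
y_pred = rng.integers(0, 2, size=1000)            # stand-in model predictions
group = rng.choice(["A", "B"], size=1000)         # sensitive attribute

frame = MetricFrame(metrics=accuracy_score,
                    y_true=y_true, y_pred=y_pred,
                    sensitive_features=group)
print(frame.by_group)                             # accuracy per group

dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
assert dpd < 0.1, f"Demographic parity difference {dpd:.3f} exceeds threshold"
```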

Outputs & Tools

  • Model Robustness & Fairness Reports (a small robustness probe is sketched after this list)
  • Ethics Validation Checklist
  • Regulatory Readiness Checklist
  • Complementary use of DAST and Pentest for interfaces hosting models (e.g., APIs, web UIs)
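
To complement those reports, even a very small adversarial probe can give an early robustness signal before a full red-team exercise. The sketch below applies an FGSM-style perturbation to a stand-in linear model; the epsilon value and the model itself are assumptions for illustration.

```python
# Minimal adversarial robustness probe (FGSM-style) on a linear model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)          # normalise features
model = LogisticRegression(max_iter=1000).fit(X, y)

eps = 0.3                                          # perturbation budget (assumed)
w = model.coef_[0]
p = model.predict_proba(X)[:, 1]
grad = (p - y)[:, None] * w                        # d(log-loss)/dx for logistic regression
X_adv = X + eps * np.sign(grad)                    # FGSM perturbation

print(f"Accuracy clean: {model.score(X, y):.3f}  adversarial: {model.score(X_adv, y):.3f}")
```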

Summary: Integrating AIRA with DevSecOps practices

Implementing AI Risk Assessment (AIRA) within the development lifecycle is essential to building secure and trustworthy AI systems.

By treating AI risk as a core part of DevSecOps – alongside Threat Modelling, SAST, SCA, DAST, and Red Teaming – organizations can address emerging threats, ensure regulatory compliance, and foster AI that is safe, explainable, and resilient by design.

At InnoWave, we empower organizations to turn uncertainty into opportunity through AI-driven cybersecurity solutions.

Written by Sergio Sa