Microsoft updates Secure Development Lifecycle to tackle AI-specific threats
Expanded SDL framework introduces tools for threat modelling, memory protections and safe agent behaviour in AI systems
Microsoft has expanded its Secure Development Lifecycle (SDL) programme to address security risks unique to artificial intelligence, launching a tailored framework to guide safe development and deployment of AI systems.
The company said the new SDL for AI recognises that artificial intelligence collapses traditional security boundaries and introduces new risks through inputs such as prompts, plugins, external APIs, model updates, memory states and retrieved data.
AI-specific threats include prompt injection, data poisoning and malicious tool interactions, according to Microsoft.
The updated framework offers specialised guidance for key areas such as AI system observability, memory protections, identity management, role-based access control (RBAC), safe model publishing and controlled system shutdowns.
Related reading
- Microsoft adds chemistry and error-correction tools to its quantum software kit
- Microsoft launches Windows 365 for Agents to run autonomous AI securely in the cloud
- Buyer’s guide identifies six top voice AI platforms for enterprise deployment in 2026
Microsoft described SDL for AI as a “dynamic framework” that integrates research, policy, standards, tooling and cross-functional collaboration to support continuous improvement.
The company said further updates will be published in future and directed developers and security teams to its security blog and documentation for more information.
The Recap
- Microsoft expands its Secure Development Lifecycle to cover AI security.
- SDL for AI centres on research, policy, standards, and enablement.
- The company said it will publish further updates and guidance.