Subscribe to Our Newsletter

Success! Now Check Your Email

To complete Subscribe, click the confirmation link in your inbox. If it doesn’t arrive within 3 minutes, check your spam folder.

Ok, Thanks

Microsoft updates Secure Development Lifecycle to tackle AI-specific threats

Expanded SDL framework introduces tools for threat modelling, memory protections and safe agent behaviour in AI systems

Defused News Writer profile image
by Defused News Writer
Microsoft updates Secure Development Lifecycle to tackle AI-specific threats
Photo by Scott Webb / Unsplash

Microsoft has expanded its Secure Development Lifecycle (SDL) programme to address security risks unique to artificial intelligence, launching a tailored framework to guide safe development and deployment of AI systems.

The company said the new SDL for AI recognises that artificial intelligence collapses traditional security boundaries and introduces new risks through inputs such as prompts, plugins, external APIs, model updates, memory states and retrieved data.

AI-specific threats include prompt injection, data poisoning and malicious tool interactions, according to Microsoft.

The updated framework offers specialised guidance for key areas such as AI system observability, memory protections, identity management, role-based access control (RBAC), safe model publishing and controlled system shutdowns.

Microsoft described SDL for AI as a “dynamic framework” that integrates research, policy, standards, tooling and cross-functional collaboration to support continuous improvement.

The company said further updates will be published in future and directed developers and security teams to its security blog and documentation for more information.

The Recap

  • Microsoft expands its Secure Development Lifecycle to cover AI security.
  • SDL for AI centres on research, policy, standards, and enablement.
  • The company said it will publish further updates and guidance.
Defused News Writer profile image
by Defused News Writer

Read More