OpenAI, the artificial intelligence company, has launched Codex Security, a tool that scans code repositories directly rather than importing output from conventional security products.
The tool does not begin by ingesting static application security testing (SAST) reports, which analyse code for vulnerabilities without executing it, but instead inspects repositories, reasons about intended behaviour, and validates findings before escalating them to human reviewers.
OpenAI says traditional SAST tools make approximations to achieve scale and often cannot determine whether in-code defences actually enforce the protections they are meant to provide.
Codex Security focuses on behaviour and constraint propagation, pulling relevant code paths, reducing problems to small testable slices, and generating micro-fuzzers and sandboxed proofs-of-concept where possible.
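The article does not publish Codex Security's internals, but the idea of reducing a problem to a small testable slice and running a micro-fuzzer against it can be sketched in plain Python. Everything below is hypothetical illustration: `parse_length_prefixed` stands in for an isolated code slice, and `micro_fuzz` is a minimal random-input harness, not OpenAI's tooling.

```python
import random

def parse_length_prefixed(buf: bytes) -> bytes:
    """Hypothetical slice under test: read a 1-byte length, return the payload."""
    n = buf[0]                        # crashes on empty input -- the latent bug
    payload = buf[1:1 + n]
    if len(payload) != n:
        raise ValueError("truncated payload")
    return payload

def micro_fuzz(target, runs=1000, seed=0):
    """Feed short random byte strings to the target and collect unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except ValueError:
            pass                      # documented, expected failure mode
        except Exception as exc:      # undocumented crash -> candidate finding
            crashes.append((data, exc))
    return crashes

findings = micro_fuzz(parse_length_prefixed)
```

A harness like this turns a suspicion into evidence: each entry in `findings` is a concrete input that crashes the slice, which is the kind of validated result a tool could escalate to a human reviewer.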
The system uses a Python environment with z3-solver, a formal verification tool, to check complex constraints such as integer overflows or issues arising from non-standard system architectures.
OpenAI argues that seeding a security agent with a SAST findings list creates three failure modes: it narrows the tool's attention prematurely, entrenches assumptions about sanitisation and trust boundaries, and obscures whether the agent discovered issues independently or merely inherited them from the source report.
Related reading
- OpenAI redesigns AI agent defences against manipulation attacks that mimic human social engineering
- OpenAI gives its developer API a built-in computer to run complex, multi-step AI tasks
- OpenAI partners with US national laboratory to speed up federal infrastructure permitting
The company says Codex Security therefore starts from the repository context and uses validation to raise confidence before interrupting a reviewer.
OpenAI acknowledges that SAST tools remain valuable for enforcing secure coding standards, identifying known vulnerability patterns, and providing defence-in-depth alongside other security measures.
The recap
- OpenAI explains why Codex Security avoids starting with SAST reports
- Codex Security validates issues with sandboxed execution and micro-fuzzers
- Users are directed to the documentation for scanning and validation details