When Anthropic released Claude Mythos Preview in April, the reaction was not the usual cycle of benchmarks and blog posts. The Federal Reserve chairman and Treasury secretary convened calls with bank chief executives.
The White House began drafting executive actions. Global companies scrambled to assess whether decades-old vulnerabilities in their operating systems and browsers were about to be exploited at machine speed.
OpenAI's response, GPT-5.5-Cyber, arrived on Thursday, less than a month later. The model is a variation of its latest GPT-5.5, internally nicknamed Spud, with guardrails loosened for cybersecurity tasks such as vulnerability identification, patch validation and malware analysis.
It is being offered to vetted members of OpenAI's Trusted Access for Cyber programme, a broader distribution model than Anthropic's more restrictive approach, which has granted access to roughly 40 organisations through its Project Glasswing initiative.
The competitive framing is obvious, but the more important story is what the parallel releases reveal about the state of AI and cybersecurity, and the widening gap between what these models can do and what governments have done to regulate them.
Both companies have crossed the same threshold. Their latest models can autonomously find previously unknown software vulnerabilities, chain them together into working exploits and, in testing, complete multi-step simulated corporate cyberattacks. The UK AI Security Institute reported last week that GPT-5.5 completed a 32-step simulated attack in two out of ten test runs; Mythos managed the same in three out of ten.
Cybersecurity researchers have since demonstrated that even older models from both companies can reproduce many of Mythos's findings through orchestration techniques, suggesting the capability is not unique to a single frontier model but is emerging across the industry.
The implications are uncomfortable. The same technology that can help defenders scan codebases and close vulnerabilities can, in the wrong hands, automate the discovery and exploitation of flaws on a scale that no human red team could match.
Anthropic has been explicit about the risk, describing Mythos as "currently far ahead of any other AI model in cyber capabilities" and warning that it "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
The two companies have adopted strikingly different release strategies. Anthropic has kept Mythos tightly controlled, limiting access to major infrastructure operators including Apple, Amazon, JPMorgan Chase and Palo Alto Networks, while committing $100 million in usage credits and $4 million in donations to open-source security organisations.
OpenAI is taking a more open approach, making its cyber model available to a broader community of defenders through its Trusted Access programme, which it has been scaling to thousands of individual defenders and hundreds of teams.
Neither approach resolves the central tension: every model released to defenders also demonstrates capabilities that attackers will eventually replicate, whether through the same models, open-source alternatives or orchestration of existing tools.
The White House is now actively discussing executive actions that could change how the federal government is involved in future model releases, a conversation that was largely theoretical before Mythos made it urgent.
The speed with which both Anthropic and OpenAI have moved to release cyber-capable models, each citing the other's progress as justification, suggests the competitive dynamic may be outrunning the policy process.
For the cybersecurity industry, the immediate priority is defensive: using these models to find and fix vulnerabilities before attackers can exploit them. But the longer-term question is whether the offence-defence balance has permanently shifted, and whether the regulatory frameworks needed to manage that shift can be built fast enough to matter.
The recap
- OpenAI launches GPT-5.5-Cyber, a cybersecurity-focused variant of GPT-5.5
- Release described as limited; positioned against Anthropic's Mythos
- Timeline for a broader release was not provided