The U.S. Department of Commerce removed online details of an arrangement that would give officials earlier access to artificial intelligence models from Google, xAI and Microsoft for security testing.
The move follows an announcement last week that framed the arrangement as part of Washington’s push for earlier visibility into advanced systems that could pose national security risks.
A review of the agency’s website found the original announcement link now returns an error page saying, “Sorry, we cannot find that page,” Reuters reported. The link redirects to the site of the Center for AI Standards and Innovation, the government body charged with running the evaluations.
The department said in an announcement that the deal lets the center evaluate models’ capabilities and security risks before public release. The center has already conducted more than 40 evaluations, the department said, including of advanced models not yet publicly available, and developers sometimes provide versions without safety guardrails so risks can be fully assessed.
The agreement builds on 2024 arrangements with OpenAI and Anthropic, reached when the center operated as the U.S. Artificial Intelligence Safety Institute. Officials have cited recent concerns about model misuse: the Pentagon’s technology chief, Emil Michael, highlighted potential risks from Anthropic’s Claude models and flagged Mythos for its advanced cyber capabilities. It is not immediately clear why the Commerce Department removed the page.
The recap
- The Commerce Department removed online details of its AI testing arrangement.
- Agency has conducted more than 40 model evaluations.
- The department did not provide a reason for removal.