OpenAI has launched GPT-5.5, its latest frontier model, and deployed it across Nvidia's global workforce through Codex, the company's agentic coding application, in what the two firms are calling a landmark enterprise AI agent deployment.
The model runs on Nvidia's GB200 NVL72 rack-scale systems, which the chipmaker says deliver 35 times lower cost per million tokens and 50 times higher token output per second per megawatt compared to previous-generation infrastructure.
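To make the claimed ratios concrete, here is a purely illustrative calculation. The baseline figures below are invented for the example; only the 35x and 50x multipliers come from Nvidia's claim.

```python
# Illustrative arithmetic only: the baseline numbers are assumptions,
# not figures published by either company. Only the 35x and 50x ratios
# are from Nvidia's claim.
baseline_cost_per_m_tokens = 7.00      # hypothetical prior-gen cost (USD)
baseline_tokens_per_s_per_mw = 2_000   # hypothetical prior-gen throughput

gb200_cost = baseline_cost_per_m_tokens / 35       # 35x lower cost
gb200_throughput = baseline_tokens_per_s_per_mw * 50  # 50x more tokens/s/MW

print(f"GB200 cost per million tokens: ${gb200_cost:.2f}")   # $0.20
print(f"GB200 tokens/s per megawatt: {gb200_throughput:,}")  # 100,000
```

Under these assumed baselines, a workload that cost $7.00 per million tokens would drop to $0.20 while a megawatt of capacity serves fifty times the tokens per second.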
GPT-5.5 was released on 23 April, just 48 days after GPT-5.4, continuing a compression in OpenAI's release cadence that has seen the interval between flagship models shrink from roughly four months to under two.
More than 10,000 Nvidia employees across engineering, product, legal, marketing, finance, sales, HR, operations and developer programmes have been given access to the GPT-5.5-powered Codex app, with engineers reporting measurable productivity gains after several weeks of use.
Debugging cycles that previously stretched across days now close in hours, and experiments on complex, multi-file codebases that once took weeks complete overnight, the company said.
In one example, Codex analysed weeks of production traffic patterns and generated custom load-balancing algorithms that increased token generation speeds by more than 20%.
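The article does not describe the algorithms Codex generated. As a generic illustration of traffic-aware routing of the kind being described, a minimal least-loaded picker might look like this (all names and numbers are invented):

```python
# Hypothetical sketch only: this is not the algorithm Codex produced,
# just a generic least-loaded routing heuristic for illustration.
def pick_replica(current_load: dict[str, float]) -> str:
    """Route the next request to the replica whose in-flight load,
    expressed as a 0..1 utilisation fraction, is currently lowest."""
    return min(current_load, key=current_load.get)

# Invented utilisation snapshot for three serving replicas.
load = {"gpu-a": 0.82, "gpu-b": 0.41, "gpu-c": 0.63}
print(pick_replica(load))  # -> gpu-b, the least-loaded replica
```

A production balancer tuned from weeks of traffic data would be far more sophisticated, but the core idea of steering requests by observed load is the same.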
The deployment is configured for enterprise security, with each employee assigned a dedicated cloud virtual machine accessible via remote Secure Shell (SSH) connections.
Agents operate in auditable sandboxes with read-only permissions to production systems, and a zero-data-retention policy governs the entire deployment.
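The controls described above can be summarised as a per-employee policy object. This is a sketch of the stated constraints only; the class and field names are hypothetical and do not come from OpenAI or Nvidia.

```python
# Hypothetical model of the deployment's access controls as reported:
# one dedicated VM per employee, SSH-only access, read-only production
# permissions, auditable sandboxes, zero data retention. All identifiers
# here are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentSandboxPolicy:
    employee_id: str
    vm_host: str                        # dedicated cloud VM per employee
    access_method: str = "ssh"          # remote Secure Shell only
    production_access: str = "read-only"
    audit_logging: bool = True          # actions recorded in an auditable sandbox
    data_retention_days: int = 0        # zero-data-retention policy

    def can_write_production(self) -> bool:
        return self.production_access == "read-write"

policy = AgentSandboxPolicy(employee_id="emp-001", vm_host="vm-001.internal")
print(policy.can_write_production())  # False: agents cannot write to production
```

Freezing the dataclass mirrors the intent of the policy: the constraints are fixed per deployment rather than adjustable by the agent at runtime.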
The companies framed the rollout as the product of more than a decade of collaboration.
OpenAI provides feedback that informs Nvidia's hardware roadmap and in return gains early access to new architectures, a relationship that produced the joint bring-up of the first GB200 NVL72 100,000-GPU cluster.
That cluster completed multiple large-scale training runs and set what Nvidia described as a new benchmark for system-level reliability at frontier scale.
GPT-5.5 was co-designed for, trained with and served on both GB200 and GB300 NVL72 systems.
OpenAI has committed to deploying more than 10 gigawatts of Nvidia systems for next-generation model infrastructure.
Related reading
- Nvidia says the open versus proprietary AI debate is the wrong argument
- Nvidia partners with energy giants to make AI data centres flex with the power grid
- Nvidia and energy partners unveil flexible AI data centres to ease grid pressure
"Let's jump to lightspeed. Welcome to the age of AI," Nvidia chief executive Jensen Huang told employees in a company-wide message.
GPT-5.5 is also rolling out to OpenAI's paid subscribers across Plus, Pro, Business and Enterprise tiers, with the cost economics of the new Blackwell infrastructure underpinning a broader push to bundle ChatGPT, Codex and an AI browser into a single service.
The recap
- OpenAI powered Codex with GPT-5.5 on Nvidia GB200 NVL72 systems.
- GB200 NVL72 delivers 35x lower cost per million tokens.
- OpenAI plans more than 10 gigawatts of Nvidia systems deployment.