NVIDIA powers training for GPT-5.2
OpenAI’s GPT-5.2 was trained and deployed on NVIDIA infrastructure, including Hopper and GB200 NVL72 systems.
OpenAI on Thursday launched GPT-5.2, and shortly thereafter, NVIDIA highlighted the role its technology played in the training and deployment. NVIDIA infrastructure, in particular the Hopper and GB200 NVL72 systems, was in the spotlight.
According to an announcement, GB200 NVL72 systems delivered three times faster training performance on the largest model tested in the latest MLPerf Training benchmarks, and nearly two times better performance per dollar, compared with NVIDIA Hopper. GB300 NVL72 delivered more than a four times speedup versus Hopper.
The majority of leading large language models were trained on NVIDIA platforms, the chipmaker noted, adding that its stack supports speech, image and video generation as well as biology and robotics. It cited examples including Evo 2, OpenFold3 and Boltz-2, and noted that its Clara synthesis models generate realistic medical images.
The company said NVIDIA Blackwell is available from major cloud service providers and server makers, and that NVIDIA Blackwell Ultra is now rolling out from server makers and cloud providers. It named Amazon Web Services, CoreWeave, Google Cloud, Lambda, Microsoft Azure, Nebius, Oracle Cloud Infrastructure and Together AI as providers offering Blackwell-powered instances.
The Recap
- OpenAI trained and deployed GPT-5.2 on NVIDIA infrastructure.
- GB200 NVL72 delivered three times faster training in MLPerf Training benchmarks.
- NVIDIA Blackwell Ultra is rolling out from cloud providers.