The announcement is precise about what each company brings. AMD contributes the computing core: Instinct MI450 Series GPUs and the broader Helios architecture.
Celestica contributes the plumbing: scale-up networking switches engineered to Open Compute Project Open-Rack-Wide specifications, using advanced networking silicon and the Ultra Accelerator Link over Ethernet architecture to move data between GPUs at the speeds large AI clusters require.
That division of labour matters. One of Nvidia's most durable advantages is not the GPU itself but the full-stack integration around it, particularly NVLink, the proprietary interconnect that lets thousands of GPUs behave as a single coordinated system.
AMD is building toward a comparable answer, and Celestica's manufacturing and engineering depth gives it a credible route to market without building that capability from scratch.
The platform targets cloud providers, enterprises and research institutions, with availability promised in late 2026. That timing puts Helios in direct competition with Nvidia's Vera Rubin, which is also scheduled to ship in the second half of next year.
The framing from both executives is telling. Celestica's Steven Dorwart emphasises delivery speed and supply chain resilience. AMD's Forrest Norrod calls Helios "a new blueprint for AI infrastructure." Neither mentions Nvidia by name.
They do not need to. The market understands the subtext. Late 2026 will clarify whether the blueprint holds.
The recap
- AMD and Celestica will deliver the Helios rack-scale AI platform.
- Celestica will supply scale-up networking switches based on the OCP Open-Rack-Wide form factor.
- Helios will be available to customers in late 2026.