Nvidia at CES: Autonomous driving, full-stack push and Vera Rubin
Here are the week's big takeaways from the world's largest company
Alpamo brings reasoning to autonomous driving
Nvidia has introduced Alpamo, a new artificial intelligence model designed to control vehicles end to end while also reasoning about its actions.
Trained directly from camera input through to steering, braking and acceleration, Alpamo is built on a large dataset combining human-driven miles, simulated data generated via Nvidia’s Cosmos platform and carefully labelled real-world examples. Unlike earlier systems, Alpamo is designed to explain its decisions and predicted trajectories, a capability Nvidia says is essential for safety validation and regulatory confidence.
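To make that architecture concrete, the sketch below shows roughly what such an interface could look like: a model that maps camera frames to steering, braking and acceleration while also emitting a predicted trajectory and a plain-language rationale. It is a minimal illustration only; the names (DrivingDecision, EndToEndDriver) are hypothetical and do not reflect Nvidia's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class DrivingDecision:
    """One planning step: low-level controls, the trajectory the model expects
    to follow, and a natural-language rationale kept for auditability."""
    steering: float                          # normalised, -1.0 (left) to 1.0 (right)
    braking: float                           # 0.0 to 1.0
    acceleration: float                      # 0.0 to 1.0
    trajectory: List[Tuple[float, float]]    # predicted (x, y) waypoints in metres
    rationale: str                           # e.g. "yielding: pedestrian at crossing"


class EndToEndDriver:
    """Wraps a camera-to-control model that also returns an explanation."""

    def __init__(self, model: Callable[[list], DrivingDecision]):
        self.model = model                   # any callable: camera frames -> DrivingDecision

    def step(self, camera_frames: list) -> DrivingDecision:
        decision = self.model(camera_frames)
        # Recording the rationale and predicted trajectory alongside the raw
        # controls is what makes each decision reviewable after the fact.
        print(f"[audit] {decision.rationale} -> {decision.trajectory[:3]}")
        return decision
```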
Full-stack strategy marks a first for Nvidia
The company said every future vehicle running on its platform will be AI-powered, with Alpamo operating as the model layer and the carmaker’s software acting as the application layer.
This represents Nvidia’s first attempt to deliver an entire autonomous driving stack, spanning silicon, software and vehicle integration. The approach reflects Nvidia’s belief that autonomy requires tight coordination across hardware and software rather than loosely connected components.
Mercedes-Benz CLA set for phased launch
The first production vehicle to use the platform will be the Mercedes‑Benz CLA.
Nvidia said the car is expected to be on US roads in the first quarter, with a European launch in the second quarter and an Asia rollout later in the year. Alpamo will be updated continuously, allowing improvements to be deployed without waiting for new vehicle generations.
Safety and redundancy at the core
Safety has been positioned as central to the partnership. Nvidia said the Mercedes-Benz CLA has been rated the world's safest car by NCAP, with every line of code, chip and system safety-certified.
Alongside Alpamo, Nvidia has spent five to seven years developing a second, independent autonomous vehicle software stack. This parallel system mirrors Alpamo's functionality but is fully traceable. A policy and safety evaluator determines which stack is in control at any moment, adding an extra layer of redundancy.
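In outline, the arrangement resembles the sketch below: both stacks produce a plan on every step and an evaluator chooses which one is in control. The names and the simple confidence threshold are placeholders; Nvidia has not published how its evaluator actually decides.

```python
from typing import Callable, Dict, Tuple

Plan = Dict[str, object]   # e.g. {"trajectory": [...], "confidence": 0.97}


def safety_evaluator(primary: Plan, fallback: Plan) -> str:
    """Decide which stack is in control. A single confidence threshold stands
    in here for what would really be certified policy and safety checks."""
    if float(primary.get("confidence", 0.0)) >= 0.95:
        return "primary"     # learned, reasoning-capable stack
    return "fallback"        # independent, fully traceable stack


def drive_step(sensors, primary_stack: Callable[..., Plan],
               fallback_stack: Callable[..., Plan]) -> Tuple[str, Plan]:
    # Both stacks run in parallel on every step; only one is ever in control.
    plan_a = primary_stack(sensors)
    plan_b = fallback_stack(sensors)
    active = safety_evaluator(plan_a, plan_b)
    return active, plan_a if active == "primary" else plan_b
```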
Siemens partnership targets AI-driven chip design
Beyond automotive, Nvidia announced a partnership with Siemens aimed at accelerating chip design and manufacturing.
The collaboration brings together CUDA-X, physical AI, agentic AI, NeMo and Nemotron, with the goal of building AI chip and system designers that work alongside Nvidia's engineers. The company believes this approach could shorten development cycles and unlock new performance gains.
Vera Rubin system enters full production
Nvidia also detailed Vera Rubin, its next-generation computing platform designed to meet the rising computational demands of artificial intelligence.
The Rubin pod contains 1,152 GPUs across 16 racks, with each rack housing 72 Rubin units. Each unit consists of two connected GPU dies. The system is designed to scale to supercomputer levels while remaining deployable within standard data centre environments.
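Those figures are internally consistent. A quick back-of-the-envelope check, counting a "GPU" as one two-die Rubin package, as the quoted numbers appear to do:

```python
# Checking the Rubin pod figures quoted above.
racks_per_pod = 16
units_per_rack = 72      # Rubin packages per rack
dies_per_unit = 2        # each package combines two GPU dies

gpus_per_pod = racks_per_pod * units_per_rack
print(gpus_per_pod)                    # 1152, matching the quoted 1,152 GPUs
print(gpus_per_pod * dies_per_unit)    # 2304 individual GPU dies per pod
```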
New CPU, networking and manufacturing advances
The Vera CPU delivers roughly twice the performance per watt of current leading CPUs, a key advantage as data centres face power constraints. It is paired with the Rubin GPU and supported by BlueField-4, a new data processing unit that offloads networking, security and virtualisation tasks.
The Rubin chip is the first manufactured using TSMC’s silicon photonics process, enabling optical interconnects to be built directly into the chip and delivering 512 ports at 200 gigabits per second.
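Taken at face value, and assuming the 200 gigabits per second figure is a per-port line rate, those numbers imply roughly 102 terabits per second of aggregate optical bandwidth:

```python
# Aggregate throughput implied by the quoted figures, assuming 200 Gb/s is
# the line rate of each of the 512 ports.
ports = 512
gbps_per_port = 200

aggregate_gbps = ports * gbps_per_port
print(aggregate_gbps)           # 102400 Gb/s
print(aggregate_gbps / 1_000)   # 102.4 Tb/s of aggregate optical bandwidth
```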
Together, the automotive and infrastructure announcements underline Nvidia’s ambition to control the full AI stack, from data centre to dashboard, as artificial intelligence moves deeper into the physical world.