Nvidia's open-source future of AI. Sort of.
Nvidia turned up at NeurIPS with a familiar message: yes, it still dominates AI hardware, and now it would very much like some credit for being “open” too.
The company has rolled out a fresh batch of models, datasets and tooling that it says will support research into both digital and physical AI. Think of it as Nvidia trying to make openness cool without letting go of the steering wheel.
The headline act is DRIVE Alpamayo R1, which Nvidia calls "the world's first open, industry-scale reasoning vision-language-action model for autonomous driving."
In plainer English, it is a model that mixes perception, language and planning so it can break down a driving scenario “and reason through each step,” according to Nvidia. The company pitches it as a kind of transparent brain for self-driving systems rather than a black box that improvises in traffic.
Nvidia is also flaunting sheer academic volume. Its researchers are presenting more than 70 papers and workshops at NeurIPS, covering everything from AI reasoning to advances in autonomous vehicle development.
It is the usual conference flex, but one backed by a new Openness Index from a group called Artificial Analysis. That index places Nvidia’s Nemotron models among the most open in the ecosystem, a description that conveniently aligns with Nvidia’s current marketing arc.
Alpamayo R1 will show up on GitHub and Hugging Face, and Nvidia is making a slice of its training data available through its Physical AI Open Datasets collection. The company has also open-sourced AlpaSim, a framework for evaluating the model’s behaviour. That is as far as the doors swing open for now.
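For researchers who want to kick the tyres once the weights actually land, the Hugging Face route will presumably follow the standard Hub workflow. Here is a minimal sketch; note that the repository ID below is a hypothetical placeholder, since Nvidia has not published the exact repo name, and the real one may differ:

```python
from huggingface_hub import snapshot_download

# Hypothetical repo ID -- Nvidia says the model will appear on Hugging Face,
# but the actual repository name is an assumption, not a confirmed identifier.
REPO_ID = "nvidia/DRIVE-Alpamayo-R1"

# Download the full model snapshot (weights, configs, tokenizer files)
# into the local Hugging Face cache and return the local directory path.
local_path = snapshot_download(repo_id=REPO_ID)
print(f"Model files downloaded to: {local_path}")
```

Expect the checkpoint to be sizeable; snapshot_download caches files locally, so repeated runs will not re-fetch anything already on disk.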
The rest of the announcement reads like Nvidia reminding everyone where it still makes its money. It describes itself as a “global leader in AI computing” with reach across gaming, professional visualisation, data centres and automotive. In other words, even when Nvidia is talking openness, the subtext is business as usual.