
Google DeepMind releases Gemini Robotics-ER 1.6 with improved spatial reasoning and instrument reading

The model, whose new instrument-reading capability was developed in collaboration with Boston Dynamics, is described as the safest in the Gemini robotics line to date

by Defused News Writer

Google DeepMind has released Gemini Robotics-ER 1.6, an updated AI model designed to improve how robots interpret physical environments, with advances in spatial reasoning, multi-view perception and a new capability for reading industrial gauges and instruments.

The model continues a reasoning-first approach to robotic AI, focusing on the perceptual and planning capabilities robots need to operate reliably in unstructured real-world settings.

Google DeepMind said the model specialises in visual and spatial understanding, task planning and success detection, the core functions required for a robot to assess its environment, decide on a course of action and determine whether it has completed a task correctly.

The instrument-reading capability, which allows robots to interpret complex gauges and sight glasses, was developed in collaboration with Boston Dynamics, the robotics company best known for its quadruped and humanoid machines, and reflects the kind of industrial inspection use cases where autonomous robots are increasingly being deployed.

Google DeepMind said Gemini Robotics-ER 1.6 is the safest model in the robotics line to date, describing it as demonstrating "superior compliance with safety policies on adversarial spatial reasoning tasks," a measure of how reliably the model behaves when presented with inputs designed to provoke incorrect or unsafe responses.

Developers can access the model through the Gemini API and Google AI Studio from today.
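For developers trying the model, a request to the Gemini API's `generateContent` REST method might look like the sketch below. This is a minimal illustration using only the Python standard library: the model identifier `gemini-robotics-er-1.6` is an assumption (the article does not state the exact API model ID), and the actual HTTP call, which requires an API key, is omitted in favour of printing the request body.

```python
import json

# Hypothetical model ID -- the article does not give the exact API identifier.
MODEL_ID = "gemini-robotics-er-1.6"

# REST endpoint pattern used by the Gemini API's generateContent method.
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL_ID}:generateContent"
)


def build_request(prompt, image_b64=None):
    """Build a generateContent request body for a text prompt, optionally
    attaching a base64-encoded JPEG (e.g. a photo of an industrial gauge)."""
    parts = [{"text": prompt}]
    if image_b64 is not None:
        parts.append(
            {"inline_data": {"mime_type": "image/jpeg", "data": image_b64}}
        )
    return {"contents": [{"role": "user", "parts": parts}]}


# Example: ask the model to read an instrument in an (elided) scene image.
body = build_request("What does the pressure gauge in this image read?")
print(json.dumps(body, indent=2))
```

In practice a developer would POST this body to the endpoint with an `x-goog-api-key` header, or use Google's client SDKs via Google AI Studio, rather than constructing the JSON by hand.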

The release comes as the broader robotics industry accelerates its adoption of foundation models, large AI systems trained on diverse data that can be adapted to specific physical tasks rather than programmed from scratch for each application.

Google DeepMind has positioned the Gemini Robotics line as infrastructure for that shift, with the ER designation indicating a model tuned specifically for embodied reasoning, the capacity to understand and act within a three-dimensional physical environment rather than operating purely in the digital domain.

The recap

  • Gemini Robotics-ER 1.6 improves spatial and multi-view reasoning
  • Adds instrument-reading capability developed with Boston Dynamics
  • Available to developers via Gemini API and Google AI Studio