Google DeepMind Unveils Gemini Robotics for Advanced AI-Powered Robots

Google DeepMind has introduced two new AI models, Gemini Robotics and Gemini Robotics-ER, based on their Gemini 2.0 model, designed to enhance robotic capabilities in the physical world, as announced in a recent blog post by Google DeepMind.

The models are designed to enhance the capabilities of robots by integrating vision, language, and physical action, enabling them to perform a wider range of real-world tasks. Gemini Robotics is an advanced vision-language-action (VLA) model that allows robots to directly execute physical actions, while Gemini Robotics-ER focuses on advanced spatial understanding and embodied reasoning, allowing roboticists to run their own programs using Gemini's capabilities.

The Gemini Robotics model is built on the foundation of Gemini 2.0 and is designed to be general, interactive, and dexterous. It can adapt to new situations, understand and respond to conversational language, and perform complex tasks requiring fine motor skills. The model has been trained to control various robot types, including the humanoid Apollo robot developed by Apptronik. This adaptability allows it to perform tasks across different environments, from homes to workplaces.

Gemini Robotics-ER, for its part, is intended to be integrated with existing low-level controllers, enhancing robots' ability to perform tasks through improved spatial reasoning and code generation.

Google DeepMind is collaborating with companies including Apptronik, Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools to further develop and test these models. The company is also releasing a new dataset to evaluate and improve semantic safety in embodied AI and robotics, helping ensure that these models operate safely in real-world scenarios.
