Google DeepMind has just taken a major step forward in robotics. With the launch of Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, robots can now search the web for help, plan multistep tasks, and even share skills across different machines.
During a press briefing, Google DeepMind explained that these new models represent a shift from robots carrying out single commands to developing genuine problem-solving capabilities in the physical world.
If robots can now Google answers, what stops them from becoming far more capable than we have imagined?
Table of Contents
- What Did Google DeepMind Actually Announce?
- How Do Robots Use the Web to Get Things Done?
- Why Is This a Big Leap for Robotics?
- Can Robots Really Learn From Each Other Now?
- What Could This Look Like in Real Life?
- How Are These Models Being Rolled Out?
- Are There Risks With Robots That Search the Web?
- So, Where Does This Leave the Future of Robotics?
It feels like we are getting closer to science fiction, but this time it is grounded in real technical progress.
What Did Google DeepMind Actually Announce?
During a press briefing, Carolina Parada, head of robotics at Google DeepMind, explained that the new AI models can move beyond one-off instructions.
Until now, robots were great at following simple orders like folding a piece of paper or unzipping a bag. But with this upgrade, they can handle connected, real-world tasks that require several steps.
Think of separating laundry into light and dark clothes, or packing a suitcase based on today’s weather in London.
These are not single actions. They require planning, information gathering, and execution. That is exactly where these models come in.
How Do Robots Use the Web to Get Things Done?
Instead of relying only on what has been pre-programmed, robots can now search the internet to make decisions.
Take waste sorting, for example. Every city has slightly different rules for recycling. Instead of being stuck with a one-size-fits-all program, a robot can now look up the guidelines online for your location and sort accordingly.
Here is how it works.
Gemini Robotics-ER 1.5 acts as the reasoning brain. It searches the web, processes the information, and translates it into simple natural-language steps.
Then Gemini Robotics 1.5 takes those steps and, using its vision and language capabilities, actually performs the task.
It’s almost like robots are now equipped with a built-in “when in doubt, Google it” feature.
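To make that division of labor concrete, here is a minimal sketch of the plan-then-act loop in Python, assuming the google-genai SDK. The model identifier is a guess based on the announcement, and the execute_on_robot() helper is purely hypothetical, since the action model itself only runs on partner robots.

```python
# A minimal sketch of the plan-then-act loop, assuming the google-genai
# SDK. The model name is a guess based on the announcement, and
# execute_on_robot() is a hypothetical stand-in: Gemini Robotics 1.5
# itself is only available to select partners.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Step 1: the ER "reasoning brain" researches the task. Grounding with
# Google Search is what lets it look up local rules on the web.
plan = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # assumed model identifier
    contents=(
        "Look up the recycling guidelines for San Francisco and write "
        "a numbered list of steps for sorting a bin of household waste."
    ),
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)

# Step 2: each natural-language step would be handed to the on-robot
# vision-language-action model for execution.
def execute_on_robot(step: str) -> None:
    """Hypothetical bridge to the robot's action model."""
    print(f"[robot] executing: {step}")

for line in plan.text.splitlines():
    if line.strip():
        execute_on_robot(line.strip())
```

The key design idea is that the two models talk to each other in plain language: the reasoning model never sends raw web pages or motor commands, only human-readable steps.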
Why Is This a Big Leap for Robotics?
Before this, robots were more like diligent assistants that needed very specific instructions. Tell them to fold a shirt and they fold it. Ask them to pick something up and they do just that.
But life is not a list of single commands. It is interconnected and constantly changing. With these new models, robots can start to navigate this complexity.
Parada summed it up well:
“With this update, we are now moving from one instruction to actually genuine understanding and problem-solving for physical tasks.”
That’s a big deal. Robots are beginning to understand context, not just commands.
Can Robots Really Learn From Each Other Now?
Yes, and this might be the most exciting part. Kanishka Rao, a software engineer at DeepMind, explained that the updated models let skills transfer from one robot to another, even if their designs are very different.
For example, a task learned on the ALOHA2 robot with two arms worked just as well on the Franka robot and even on Apptronik’s humanoid Apollo.
Why does this matter? Because instead of programming each robot separately, you can train one, and the rest pick it up automatically.
Imagine how much faster industries could scale robotics with this kind of shared skill network. It is like robots are starting to form their own version of a “classroom,” where one learns and the others instantly follow along.
What Could This Look Like in Real Life?
On paper, sorting laundry or packing a suitcase may sound trivial. But these examples show something bigger: adaptability.
Robots that can fetch context from the web and apply it in the real world could support everything from households and hospitals to warehouses and factories.
They could adjust on the fly to different rules, weather conditions, or customer needs without waiting for a human to reprogram them.
For businesses, this could mean robots that are not just faster, but smarter and able to adapt to different workflows or countries without major rework.
How Are These Models Being Rolled Out?
Google DeepMind is gradually opening access. Gemini Robotics-ER 1.5 is being released to developers through the Gemini API in Google AI Studio.
Meanwhile, Gemini Robotics 1.5 is only available to select partners for now.
This limited release makes sense. These capabilities are powerful, but they also need close oversight before becoming mainstream. It is a step forward, but one taken cautiously.
Are There Risks With Robots That Search the Web?
If robots are searching the internet, what happens if they land on bad or misleading information? After all, even humans struggle with misinformation online.
DeepMind’s approach is to use embodied reasoning—the ER model translates web findings into controlled, natural-language steps before anything is executed.
That is a safeguard, but the question of trust and safety will continue to loom large as this technology evolves.
It’s clear the models are powerful, but whether businesses and households fully embrace them will depend on proving they are reliable and safe in real-world environments.
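The briefing did not detail how that checkpoint works internally, but a toy sketch shows the general pattern of gating natural-language steps before execution. Everything below, from the denylist to the function names, is an illustrative assumption rather than DeepMind's actual mechanism.

```python
# Toy sketch of a "review before execute" gate for natural-language
# steps. The denylist and function names are illustrative assumptions,
# not DeepMind's safety mechanism.
BLOCKED_TERMS = ("knife", "stove", "bleach")  # crude denylist for the demo

def is_step_safe(step: str) -> bool:
    """Flag steps that mention anything on the denylist."""
    lowered = step.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def gated_execute(steps: list[str]) -> None:
    """Run only the steps that pass review; hold the rest for a human."""
    for step in steps:
        if is_step_safe(step):
            print(f"[robot] executing: {step}")
        else:
            print(f"[blocked] {step!r} needs human review")

gated_execute([
    "Pick up the plastic bottle",
    "Place it in the blue recycling bin",
])
```

Because every step is plain language, this kind of review can happen before anything physical moves, which is exactly the property the embodied-reasoning design is meant to provide.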
So, Where Does This Leave the Future of Robotics?
This update signals a turning point. For years, robots have had strong hardware but limited flexibility. With large language models now embedded, that gap is closing.
Robots are shifting from single-purpose machines into generalized assistants. They can adapt, learn and apply skills across environments. They are not just tools anymore; they are starting to feel like partners.
And while it is still early, with Gemini Robotics 1.5 restricted to select partners and experiments ongoing, the direction is clear. Robots that can search, reason and share knowledge are coming.
The only real question is: are we ready for them?
Key Takeaways
- Google DeepMind launched Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, bringing web search and multistep reasoning to robots.
- Robots can now handle tasks like packing a suitcase for London weather or sorting recycling using local guidelines.
- Skills learned on one robot can transfer to completely different robots, speeding up development.
- Gemini Robotics-ER 1.5 is available through the Gemini API, while Gemini Robotics 1.5 is limited to select partners.
- This marks a shift from single-task execution to context-aware problem-solving in robotics.
- The future points toward robots as adaptive assistants but safety, reliability and oversight remain critical.