Holy shit, robots can Google stuff now. Google DeepMind just dropped Gemini Robotics 1.5 and Robotics-ER 1.5, and I’m not sure we’re ready for the implications. These aren’t your typical “pick up red block” demo bots — we’re talking about machines that can plan multiple steps ahead, search the web for information, and actually complete complex real-world tasks.
The breakthrough here is in what DeepMind calls “genuine understanding and problem-solving for physical tasks.” Instead of robots that follow single commands, these models let machines think through entire workflows. Want your robot to sort laundry? It’ll separate darks and lights. Need help packing for London? It’ll check the weather first, then pack accordingly. One demo showed a robot helping someone sort trash, compost, and recyclables — but here’s the kicker: it searched the web to understand that location’s specific recycling requirements.
The technical setup is elegant in that “why didn’t we think of this sooner” way. Gemini Robotics-ER 1.5 acts as the planning brain, understanding the environment and using tools like Google Search to gather information. It then translates those findings into natural language instructions for Gemini Robotics 1.5, which handles the actual vision and movement execution. It’s like having a research assistant and a skilled worker collaborating seamlessly.
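To make that split concrete, here's a minimal sketch of the planner/executor pattern using the google-genai Python SDK that Google AI Studio exposes. The model ID and the `RobotExecutor` stub are my own assumptions for illustration — the actual Robotics 1.5 action model isn't publicly callable, so the "executor" here is just a stand-in that prints each instruction.

```python
# Sketch of the two-stage setup described above: an "embodied reasoning"
# planner that can search the web, handing natural-language steps to an
# action model that owns vision and movement. Model ID and RobotExecutor
# are assumptions for illustration, not the real product API.
from google import genai
from google.genai import types


class RobotExecutor:
    """Stand-in for the Gemini Robotics 1.5 action model (partner-only),
    which turns a text instruction into perception and motor control."""

    def run(self, instruction: str) -> None:
        print(f"[robot] executing: {instruction}")


def plan_and_act(task: str, executor: RobotExecutor) -> None:
    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    # Step 1: the planner model thinks through the task. Grounding it with
    # Google Search lets it pull in facts it can't know a priori, e.g. a
    # city's specific recycling rules.
    plan = client.models.generate_content(
        model="gemini-robotics-er-1.5",  # assumed model ID
        contents=(
            f"Task: {task}\n"
            "Break this into short natural-language steps a robot can "
            "follow. Look up any local rules you need first."
        ),
        config=types.GenerateContentConfig(
            tools=[types.Tool(google_search=types.GoogleSearch())],
        ),
    )

    # Step 2: hand each step to the action model, one instruction at a time.
    for step in plan.text.splitlines():
        if step.strip():
            executor.run(step.strip())


if __name__ == "__main__":
    plan_and_act(
        "Sort the items on the counter into trash, compost, and recycling",
        RobotExecutor(),
    )
```

The detail worth noticing: the hand-off between the two models is plain natural language, which is exactly why the same plan can drive very different robot bodies, as the next point shows.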
But the real game-changer might be the cross-robot compatibility. Tasks developed for the ALOHA2 robot (which has two mechanical arms) “just work” on the bi-arm Franka robot and even Apptronik’s humanoid Apollo. This skill transferability could accelerate robotics development dramatically — instead of starting from scratch with each new robot design, we’re looking at a shared knowledge base that grows with every implementation.
“With this update, we’re now moving from one instruction to actually genuine understanding and problem-solving for physical tasks,” said DeepMind’s head of robotics, Carolina Parada. The company is already rolling out Gemini Robotics-ER 1.5 to developers through the Gemini API in Google AI Studio, though the core Robotics 1.5 model remains limited to select partners for now.
Look, I’ve written about enough “robot revolution” announcements to be skeptical (and you should be too). But this feels different. We’re not talking about theoretical capabilities or lab demonstrations that fall apart in real conditions. This is about robots that can adapt to new situations, research solutions independently, and transfer knowledge across completely different hardware platforms. The mundane applications alone — from warehouse automation to elderly care assistance — represent a fundamental shift in what we can expect machines to handle autonomously.
The question isn’t whether this technology will change industries. It’s how quickly we can scale it up and what creative applications emerge when robots can finally think beyond their immediate programming.
Read more from The Verge.
Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan