Google’s AI robots are learning from watching movies – just like the rest of us



Google DeepMind’s robotics team is teaching robots to learn the way a human intern would: by watching a video. The team has published a new paper demonstrating how Google’s RT-2 robots, equipped with the Gemini 1.5 Pro generative AI model, can absorb information from videos to learn how to get around a space and even carry out requests at their destination.

Training a robot like a new intern is possible thanks to the Gemini 1.5 Pro model’s long context window, which lets the AI process large amounts of information, such as an entire walkthrough video, in a single pass. The researchers film a video tour of a designated area, such as a home or office; the robot then watches the video and learns the layout of the environment.
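To make the idea concrete, here is a minimal sketch of the same pattern using Google’s public google-generativeai Python SDK: upload a recorded tour, then ask Gemini 1.5 Pro a navigation-style question about the space. DeepMind’s actual robot pipeline is not public, so the file name, prompt, and API key handling below are illustrative assumptions, not the paper’s method.

```python
# Sketch (not DeepMind's pipeline): give Gemini 1.5 Pro a video tour and ask
# it a navigation-style question. Uses Google's public google-generativeai SDK;
# the file path and prompt are illustrative assumptions.
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: caller supplies a key

# Upload the recorded walkthrough; video files are processed asynchronously.
tour = genai.upload_file(path="office_tour.mp4")
while tour.state.name == "PROCESSING":
    time.sleep(5)
    tour = genai.get_file(tour.name)

model = genai.GenerativeModel("gemini-1.5-pro")

# The long context window lets the whole tour plus the request fit in one call.
response = model.generate_content([
    tour,
    "Based on this tour, explain how to get from the entrance to the "
    "kitchen, and what you would find on the counter when you arrive.",
])
print(response.text)
```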




