Google’s Gemini Live, the feature that lets people hold natural, real-time conversations with the Gemini assistant, is rolling out to more people. Initially, it was only available to Pixel 9 and Samsung Galaxy S25 users, but it will soon be available on all Android devices.
The initial exclusivity let Google test the features with a smaller group, gather feedback, and make improvements. The early response was very positive, so Google decided to speed up the rollout and release it to more people.
For those who missed the announcement, Gemini Live now includes camera and screen sharing, which makes it far more useful. The upgrade turns Gemini Live from a simple conversational tool into a visual assistant that can understand and respond to real-world situations.
The rollout will happen in phases, but it is a big step because many more people will be able to use these advanced features, and for free: devices only need the Gemini app, not a Gemini Advanced subscription. I still haven’t gotten access myself, but since I’m on Android, it should arrive soon.
The camera and screen-sharing features in Gemini Live are useful in many ways. With camera sharing, users can get real-time help with tasks like organizing a messy room or fixing a broken device. Gemini Live looks at what the camera sees and gives suggestions based on what it notices. For example, you can point your camera at a cluttered drawer for organization tips, or at a broken appliance for step-by-step repair advice.
Screen sharing was previously available on PC, where Gemini Live can pull information from websites, give quick summaries, or compare products. Still, don’t expect it to work perfectly. When I used it on my computer, I asked it to help review my son’s schoolwork, and it was often incorrect, even though the material was plain text. Be prepared for hiccups: the feature is still being refined across devices and calls for some patience.
The AI is also meant to act like a personal shopping helper, looking at online listings and suggesting your best options. You can also use screen sharing to get feedback on creative work, like blog posts or social media posts. Combining camera and screen sharing makes things even smoother, like showing Gemini your clothes while shopping online to find matching items.

Right now, these features are not available on iOS, likely because Google is still working to make them run well on iPhones. The company hasn’t shared a clear timeline for iOS access, so iPhone users will have to wait for now.
Microsoft’s Copilot Vision offers similar visual help but requires a paid Copilot Pro subscription on mobile and is only available in the US. This makes Gemini Live’s wider Android availability a strong advantage, putting Google in a good position in the fast-growing world of AI-powered mobile assistants.
Since the rollout is phased, it will take some time to reach everyone. Every Android device will eventually get it, with availability expanding country by country, and it will likely be everywhere by the end of next week.