Google has dropped a Gemini update for phones, and it’s a doozy


I had a dripping tap in my kitchen for a couple of weeks recently. It wasn’t a massive problem; just one of those little inconveniences that I’d been putting off dealing with. 

Normally, I’d Google the problem to see if there was an easy fix, and maybe even watch a few YouTube videos. But without the plumbing knowledge to find the right tutorials and videos, it’d be a bit of a slog. A manageable one, but a slog nonetheless.

But then I remembered the new Google Gemini update that had landed on my Google Pixel 9 Pro XL days earlier. So, rather than frantically Googling for a fix, I fired up Gemini Live, tapped the new Camera icon and showed Gemini the problem.

Within a few seconds, it had identified the issue, suggested a likely cause – in my case, a worn rubber washer – and even talked me through a fix. I didn’t need the right search terms, or even to describe the problem. Gemini could see it – and that’s huge for smartphone AI. 

Gemini Live uses your phone’s camera like eyes

Generative AI has come a long way over the past few years. I still remember seeing ChatGPT in action for the first time and being absolutely floored (and a little bit scared) by what it could generate in such a short amount of time. Compared to the GPT-4o-powered ChatGPT experience we get now, that early version looks basic. 


But, generally speaking, it’s still largely a description-based experience. Sure, you’ve got elements like ChatGPT’s Voice mode and even Gemini’s regular, conversational Live mode for a more natural back-and-forth, and most AI tools can now analyse photos, audio and files, but they still rely on your instructions and descriptions of what you want. 

Gemini Live camera update
Image Credit (Trusted Reviews)

That’s why the latest Gemini Live update is such a game-changer for smartphone-powered AI. 

It essentially turns your phone’s camera into a live input stream for AI, allowing Gemini to interpret what it sees as you show it. It’s a much more fluid experience than uploading an image and waiting for a response; simply point and ask what the problem could be. 

Now that was handy for my leaky kitchen tap, but that’s only scratching the surface of what a camera-powered AI assistant can help with. 

Reorganisation, troubleshooting and even fashion advice

I work from home for around half of the week, and while my dual-screen setup helps me be productive, it’s far from the prettiest setup around. There are cables and phones everywhere, little trinkets and a small desk fan to keep me cool during those rare hot days. 


Rather than scour Pinterest and Reddit for WFH setup inspiration, I fired up Gemini Live, showed it my setup and asked how I could improve it. It offered suggestions on cable management, talking me through the items I’d need (cable ties or velcro straps) and even where to buy them. 

Gemini Live camera update
Image Credit (Trusted Reviews)

It also suggested a different layout for my screens to better suit the size of my table, and recommended storing all my trinkets beneath one of my displays in a dedicated area. 

Importantly, with each suggestion, it asked me what I thought of it and if I wanted to change it in any way before talking me through the process. The result is a desk that, though still fairly cluttered, is more manageable than it was.  

You could also point the camera at your fridge and ask what you could have for dinner, get home improvement and decoration suggestions, and even find out what the weird bug that landed on your garden table is. 

Google claims it can also help organise drawers and closets, troubleshoot issues like a squeaky chair or a record player that keeps skipping, and even provide fashion advice, showcasing just how versatile the new functionality is. 


OpenAI teased it, but Google shipped it

If this sounds familiar, it should; GenAI competitor OpenAI teased very similar functionality during the GPT-4o (“Omni”) reveal back in May 2024. 

The real-time vision feature was teased and demoed alongside other key improvements to the model, like natural conversation with emotional tones. GPT-4o is now available to ChatGPT users, but there’s a catch – the video functionality isn’t widely available just yet.

Gemini Live camera update
Image Credit (Trusted Reviews)

Gemini Live’s camera functionality, on the other hand, is live and working – even if Google didn’t make as big a fuss about the launch as OpenAI did with its equivalent. And, let’s be honest, while demos are cool, the real winner is the company that can get these features into the hands of consumers first.

And, in this race, it looks like Google has just pulled ahead. 

It’s a Pixel and Galaxy exclusive – for now

Google hasn’t won the race outright just yet, simply because the rollout of the functionality is limited to the Google Pixel 9 series and Samsung’s Galaxy S25 line – for now, anyway. 


Google claims that the functionality will be rolling out to all Gemini Advanced subscribers in the near future, regardless of the Android device they’re using – though there’s no solid timeline in place right now.

Still, if you’ve got a recent Pixel or Galaxy device, you can try this next-gen tech right now. Simply fire up Gemini Live, tap the camera icon and start showing Gemini the world around you.

It’s rare that a software update can dramatically supercharge a device, but it feels like Google has achieved just that with the new AI functionality. 


