Google’s Project Astra Could Be an iPhone Moment for Smart Glasses



Key Takeaways

  • Google’s Project Astra could revolutionize smart glasses with AI assistant capabilities.
  • Future glasses could control all phone apps with voice commands, freeing us from constantly using our phones.
  • Potential privacy features like indicators for recording and driving restrictions are crucial for the next generation of smart glasses.


Google recently shared its Project Astra, an early demo of what a future AI-powered universal assistant could look like. Near the end of the demo, a Google employee puts on a pair of glasses and seamlessly continues conversing with the assistant. I think smart glasses have just found their killer feature.


With Project Astra, the Time Is Right for the Next Generation of Google Glass

By now, Google Glass is an old piece of tech, but it's fair to say it was ahead of its time. The thing is, back in 2013, the tech that could make smart glasses actually useful just wasn't there. With Project Astra, though, Google Glass could finally have its iPhone moment.


Ethical issues aside, if Google and DeepMind manage to rein in hallucinations, not fully, but just enough to make the foundational model behind the assistant reliable when it matters, we could have a proper AI assistant in our hands and on our phones. However, I think the ideal way to use an assistant you can freely converse with isn't through your phone's screen; it's with your voice.

One option is a smartwatch, but you'd have to raise your arm to your face every time you want to issue a command or ask the future version of Gemini something. A pair of lightweight smart glasses with a camera, so your AI assistant can see the world you're seeing, and a microphone to hear you could be a game-changer: the killer feature smart glasses have been looking for ever since 2013. The next generation of Google Glass could herald a new era of personal computing.

Just imagine wearing smart glasses equipped with a HUD advanced enough to show you messages, appointments, Google search results, the song you're listening to along with the rest of the playlist, and other text-based information. This could be the shakeup the world of personal computing needs. Smartphones have turned into commodities anyway, so why not put them in our pockets and let them stay there?


Smart Glasses Could One Day Be the Only Tech You Need

While the Project Astra demo is indeed impressive, Google and DeepMind have many issues to solve (whether those issues are solvable at all is still an open question) and numerous advancements to make before releasing the next generation of the Gemini AI assistant to the public. But the end result could finally free us from our phones. For real this time. Five years from now, everyone could be wearing glasses instead of constantly checking their phones.

This AI assistant of the future shouldn't just answer your questions; it should also be able to operate your phone apps on your behalf. And I'm not just talking about Google apps, I'm talking about all apps: Spotify, the camera app, the weather app you're using, your favorite food delivery app, Instagram, Slack, the rest of your chat apps, the whole nine yards.


When you think about it, it makes sense. If you've got access to a highly versatile, universal AI assistant that's with you all the time, that assistant should be able to handle every single app you've got on your phone. Besides, if you're already wearing smart glasses, you wouldn't want to constantly juggle multiple devices while on the move.

Sure, I don't mind picking up my phone to check a new message while typing this text, but when I'm out and about, listening to music and wearing smart glasses connected to my phone, I don't want to pull out my phone to change the song, check when my favorite team plays next, run a quick Google search, look up the opening hours of a nearby fast food joint, or type a quick text message. If I already have an AI assistant on my phone, and I'm fine with it doing things for me, I'd like it to handle all the phone-related tasks on my behalf.

For that to happen, Google will most likely have to create a new kind of multimodal foundation model: one that can understand voice and typed commands (many people prefer typing to speaking) and then use those commands to control your phone apps for you.


This future AI model would be a sort of Gemini subroutine, one that communicates with the main model and executes actions based on your commands. Since this kind of model wouldn't have to be as versatile as current LLMs, it could reside on your phone and work without the cloud, which would go a long way toward solving the privacy issue. Of course, there's a lot more to say about privacy here, but that's beyond the scope of this article.
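To make the idea a little more concrete, here is a minimal, purely hypothetical sketch of what such an app-controlling subroutine could look like. Every name in it (the CommandRouter class, the app and action strings) is invented for illustration; this is not a real Google or Android API, just the general shape of a component that receives a structured command from the assistant and hands it to the right app.

```kotlin
// Hypothetical sketch of an on-device "command router" that a Gemini-style
// subroutine might feed. All names are made up for illustration only.

// A structured action the on-device model might emit after hearing
// "skip this song" or reading a typed command.
data class AppCommand(
    val app: String,
    val action: String,
    val args: Map<String, String> = emptyMap()
)

// Each app (or its deep-link surface) registers a handler for its commands.
typealias CommandHandler = (AppCommand) -> String

class CommandRouter {
    private val handlers = mutableMapOf<String, CommandHandler>()

    fun register(app: String, handler: CommandHandler) {
        handlers[app] = handler
    }

    // Dispatch only needs the structured command, not the raw audio or the
    // full conversation, which is what would let this step stay on the phone.
    fun dispatch(command: AppCommand): String =
        handlers[command.app]?.invoke(command)
            ?: "No handler registered for ${command.app}"
}

fun main() {
    val router = CommandRouter()

    router.register("music") { cmd ->
        when (cmd.action) {
            "skip" -> "Skipping to the next track"
            "play" -> "Playing ${cmd.args["title"] ?: "something"}"
            else -> "Music app can't do '${cmd.action}'"
        }
    }
    router.register("messages") { cmd ->
        "Sending \"${cmd.args["text"]}\" to ${cmd.args["to"]}"
    }

    // Imagine these commands arriving from the on-device model after it
    // parses "skip this song" and "text Ana I'm running late".
    println(router.dispatch(AppCommand("music", "skip")))
    println(router.dispatch(AppCommand("messages", "send",
        mapOf("to" to "Ana", "text" to "Running late"))))
}
```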

One day, when the tech becomes advanced enough, you could have Gemini and its app-controlling subroutine connected to your Google Glass 3.0, or whatever they end up being called, and the glasses could feature a proper AR-capable display, straight from the future. Not only could we fully interact with our phones thanks to the app-controlling assistant subroutine, we could also watch videos, scroll our Reddit and Instagram feeds, take photos and videos with our glasses, and never take our phones out of our pockets.


Our phones could transform into a sort of “computing station,” with our glasses taking center stage. Smartphones would still be the engines powering all that computing, but they'd do it behind the scenes.

Google Should Equip Future Smart Glasses With Certain Privacy Features

One of the most talked-about issues with the original Google Glass back in the day was just how privacy-invasive it was. Back in 2013, people weren't accustomed to others wearing cameras on their heads and constantly recording with them. But a lot has changed since then, for better or worse.

Nowadays, you’ve got GoPros everywhere, thousands of live streams of people walking around the biggest cities of the world, everyone is constantly taking selfies and filming themselves in public, social network feeds are filled to the brim with public videos and photos, and there’s a good chance that every step you take outside your home—and for many of us, even inside—is being filmed by the glut of security cameras found on every corner.


And let’s not forget that many of us have gotten used to large corporations using our data for advertising or other purposes, which is a sad but undeniable reality. If Google Glass debuted in 2024, no one would bat an eye.

Still, if Google is planning to drop the next generation of Google Glass on us, the company should take a page from Humane's book (for all its faults, the AI Pin has a pretty solid grasp on privacy) and equip the smart glasses with certain privacy features.

For example, the frame, or part of it, could light up when the glasses are taking photos or videos, and recording audio could require holding the frame. Next, I'm certain that being able to watch your social media feed while driving would be super alluring for some people, so Google should use its context-aware AI assistant to disable the HUD while driving; the virtual assistant should only be able to read your incoming texts and emails to you, let you reply, and nothing more. I'm also certain that many companies will forbid glasses on their premises, considering industrial espionage and all that jazz. But as long as you wear them, others should be aware of whether they're being recorded.
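The driving restriction is really just a context-aware policy. As a rough, hypothetical sketch, assuming an invented context signal and made-up capability names (none of this is anything Google has announced), it could boil down to something like this:

```kotlin
// Hypothetical sketch of the driving restriction described above.
// The context signal, capability names, and policy are invented.

enum class Context { IDLE, WALKING, DRIVING }

enum class Capability {
    SHOW_FEED, SHOW_VIDEO, READ_MESSAGE_ALOUD, DICTATE_REPLY, RECORD_VIDEO
}

// While driving, the glasses' HUD is effectively off: only hands-free
// message reading and replying stay available.
fun isAllowed(context: Context, capability: Capability): Boolean =
    when (context) {
        Context.DRIVING -> capability == Capability.READ_MESSAGE_ALOUD ||
                           capability == Capability.DICTATE_REPLY
        else -> true
    }

fun main() {
    println(isAllowed(Context.DRIVING, Capability.SHOW_FEED))          // false
    println(isAllowed(Context.DRIVING, Capability.READ_MESSAGE_ALOUD)) // true
    println(isAllowed(Context.WALKING, Capability.SHOW_FEED))          // true
}
```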


While it’s obvious that Google’s Project Astra is still in the early stages, the potential is definitely there. If we do get a super versatile AI assistant, I can’t see any other future than one where we’re all bespectacled.


