Google announces ‘AI Mode’ as a new way to use Search


Google today is announcing and beginning to test an “AI Mode” for Search. This “early experiment” lets you ask more complex questions that are answered by an AI-generated response that takes up the entire page.

According to Google’s user testing, people like AI Overviews. The feature results in both additional lookups and longer, more unique queries as people discover that Search can now answer more types of questions thanks to AI.

To that end, the company says “power users” want AI-powered responses for more types of searches, so much so that Google has noticed people appending “AI” to the end of queries. This is why Google Search is adding an AI Mode, which, compared to AI Overviews, offers “more advanced reasoning, thinking, and multimodal capabilities.”

Besides always getting a direct answer, AI Mode lets you ask more “nuanced,” multi-part questions that might have previously taken several searches.


On desktop and mobile, you enter a query into Google as you normally do and then tap the new “AI Mode” button that appears alongside the filters for All, Images, Videos, etc. There’s also a “Go deeper” shortcut at the bottom of AI Overviews, while dedicated entry points to AI Mode let you skip the usual results and just interact with the new experience. Your query is listed at the top of the page with the generated result appearing underneath.

Instead of a top Search bar, you’ll find a chat-style “Ask a follow up…” field at the bottom. On mobile, you can upload images and speak your query to AI Mode, though you’ll only get text output at the moment. A thread history will let you see past searches.

In the example below, the query is: “how do migrating birds know where to go.” At a high level, AI Mode will think (or make a plan), perform multiple searches, and organize the results. On mobile, a carousel of the sites that were sourced appears before the concise answer. There’s then another carousel of articles, with the specific aspects of the answer (in this case) listed after that.

AI Mode is powered by a custom version of Gemini 2.0 with access to real-time sources and information about the real world, like the Knowledge Graph and the Shopping Graph’s details on billions of products. Google performs multiple related searches at the same time and brings those results together. This “query fan-out” technique finds information across subtopics and multiple data sources before synthesizing.
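To make the “query fan-out” idea concrete, here is a minimal sketch of what such a pattern could look like in code: a question is split into related sub-queries, the searches run concurrently, and the combined results are handed off for synthesis. Every function name below is a hypothetical stand-in for illustration; Google has not published its implementation.

```python
# Illustrative sketch of a "query fan-out" pattern (hypothetical, not Google's code):
# break a question into sub-queries, run the searches in parallel, then
# combine the results before synthesizing a single answer.
from concurrent.futures import ThreadPoolExecutor


def search(sub_query: str) -> list[str]:
    """Placeholder for a real search backend; returns snippet strings."""
    return [f"snippet about '{sub_query}'"]


def fan_out(question: str, sub_queries: list[str]) -> str:
    # Issue the related searches concurrently rather than one after another.
    with ThreadPoolExecutor(max_workers=len(sub_queries)) as pool:
        results_per_query = pool.map(search, sub_queries)

    # Flatten results gathered across subtopics and data sources.
    snippets = [s for results in results_per_query for s in results]

    # A real system would pass these snippets plus the original question to a
    # model for synthesis; joining them here stands in for that step.
    return f"Answer to '{question}' drawn from {len(snippets)} sources."


if __name__ == "__main__":
    print(fan_out(
        "how do migrating birds know where to go",
        ["bird magnetoreception", "star navigation in birds", "learned migration routes"],
    ))
```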

The model learns how to verify and back up what it says. If there’s not enough information, users will be directed to web results. Responses are meant to be factual and objective without personality or expressing opinion.

In the second example, someone asks: “when is the best time this week to schedule an outdoor engagement photoshoot in the Boston public garden.” Google answers with a specific day because of the ideal weather conditions (real-time info) and recommends golden hour, while noting what time the sun will set.

In the final example, you can see how AI Mode differs on mobile and desktop: “what’s the difference in sleep tracking features between a smart ring, smartwatch and tracking mat.”

The answer is in the form of a comparison table (which is not live today, but coming soon), and you can also ask follow-up questions at any time, like “what happens to your heart rate during deep sleep.”

There are no 10 blue links in the traditional sense, but you’ll find inline sources, as well as a side card on desktop. In the future, Google will add a rich carousel of links. Google says AI Mode is meant to connect users to the web, and it found that people fundamentally want to verify information and see various sources alongside AI answers. The underlying model is tuned to know how to link to sources.

From early testing, AI Mode queries are twice as long as traditional searches, with people following up a quarter of the time. 

More broadly, Google finds that people know when to use regular Search and when to use AI Mode. These two experiences are meant to coexist, with Google today updating AI Overviews to Gemini 2.0 in the US for complex coding, advanced math, and multimodal (like Lens) queries. That list will expand over time, while all other queries use the existing model. Google says Gemini 2.0 allows for “faster and higher quality” responses, with AI Overviews appearing “more often for these types of queries.” Meanwhile, Google is also expanding AI Overviews globally and removing the need to sign in to see them.

Starting today, Google One AI Premium ($19.99 per month) subscribers in the US will be invited (via email) to opt in to AI Mode in Search Labs. You can also join the waitlist here. The goal over time is to let anyone use this experience, but testing right now is geared toward power users.
