Google AI Overviews is a feature within Google Search that provides AI-generated summaries and insights at the top of search results. It is powered by Google’s custom artificial intelligence model, Gemini, and is designed to help users quickly find detailed and contextually relevant information from a variety of online sources.

The following guide is designed to help you understand Google AI Overviews, how it works, and how it aligns with Google's Search Generative Experience (SGE) initiative.

Understanding Google AI Overviews

Google AI Overviews was previously part of SGE, a feature within Google Search that allowed users to tap into Google’s generative AI technology for text and image-based searches. This has since evolved into a broader experiment that expands Google’s generative AI capabilities to more parts of the search experience.

Google AI Overviews leans on the software giant’s custom large language model, Google Gemini, to provide snapshot-type answers to users’ search queries (hence “overviews”). Because Gemini is integrated into Google’s core web ranking system, AI Overviews can search and pull relevant information from Google’s own index.

The system is designed to make it easier for users to find the information they need quickly, without having to sift through multiple sources. For example, if you're looking for information on climate change, AI Overviews can provide a summarized response with key facts and statistics, along with links for further reading. In doing so, AI Overviews reduces the effort users need to put into searching for information by doing all the legwork for them — or so the theory goes.

Key features of Google AI Overviews

Multi-step reasoning

AI Overviews can handle complex, multi-part user queries by linking together related information and providing more nuanced responses as a result. For example, you could search "Show me the best gyms within 30 minutes' drive with no joining fees," and AI Overviews will show you nearby gyms, their distance from you and any relevant sign-up offers (Figure A).

Likewise, asking the prompt “What are good options for a day out in Dallas with the kids? Recommend some ice cream shops near each option” might see AI Overviews respond with a list of family-friendly ideas, each accompanied by nearby ice cream shops and a map showing their locations.

Figure A: AI Overviews can serve up more contextually relevant information than traditional Google searches. Image: Google

Planning and brainstorming aids

Beyond answering questions, AI Overviews can help users plan activities or gather ideas for projects by aggregating relevant information and resources. For example, you could search “Create a five-day high protein meal plan that’s easy to prepare,” and AI Overviews will aggregate recipes from across the web to give you a jumping-off point. From there, you can customize responses, such as asking for vegetarian alternatives, and add the necessary ingredients to your shopping list.

Starting this summer, users will also be able to use AI Overviews for trip planning. This was shown off by Sissie Hsiao, vice president at Google and general manager for Gemini experiences, during Google I/O.

During a demo, Hsiao gave the example prompt: "My family and I are going to Miami for Labor Day. My son loves art, and my husband really wants fresh seafood. Can you pull my flight and hotel information from Gmail and help me plan the weekend?" Gemini then pulls together information from Maps, Search and Gmail to provide a customized itinerary, factoring in things like flight schedules and the distance of the hotel from suitable nearby dinner spots.

Google plans to add more customization capabilities later in the year, including more comprehensive recommendations for less specific prompts. For instance, "Anniversary celebration dinner places Dallas" might highlight venues with a more romantic ambience or — weather permitting — spots with rooftop dining.

SEE: Artificial Intelligence: Cheat Sheet

Video-based search

In the future, AI Overviews will be capable of understanding and responding to queries uploaded in video form; this means users will be able to shoot a video of something, ask a question about it and have AI Overviews help them out. Google says this will make it easier to find answers to problems without the need to type out detailed text descriptions (Figure B).

Figure B: Shoot a video and ask a question; AI Overviews will soon be able to respond. Image: Google

AI Overviews vs Google SGE

Google introduced generative AI to its search platform on October 12, 2023, as part of SGE. It was an experiment by Google Search Labs that initially allowed users to generate AI-powered images and text directly from the search bar. Its aim was to provide more creative answers to questions that traditional search results might not fully address.

SGE has since become AI Overviews, which began rolling out to users in the U.S. on May 14 following the Google I/O 2024 conference. The feature will be expanded to users worldwide throughout the remainder of the year. Google’s aim is to make AI Overviews available to more than a billion people by the end of 2024.

How to access, customize or disable AI Overviews

To access AI Overviews, just perform a regular search on Google. The AI-generated overview will show up at the top of the search results if it’s relevant to your query, much like a knowledge panel.

There's currently no built-in option to disable AI Overviews entirely. A Google support page notes, "AI Overviews are part of Google Search like other features, such as knowledge panels, and can't be turned off." That means the best you can do is ignore them and focus on the traditional search results instead. If you prefer the usual list of links, you can switch to the Web tab at the top of the Google Search results page. Alternatively, you can install a third-party browser extension that hides AI Overviews, though we advise caution when granting extensions access to your browsing.
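For readers who want to jump straight to the Web tab, that view corresponds to Google's `udm=14` URL parameter, which many users have adopted as a shortcut to link-only results. Below is a minimal sketch in Python showing how such a URL can be built; the helper name `web_only_search_url` is our own, and the parameter's behavior is Google's to change at any time.

```python
from urllib.parse import urlencode


def web_only_search_url(query: str) -> str:
    """Build a Google Search URL that opens the link-only 'Web' tab.

    The udm=14 parameter selects the Web filter, which shows
    traditional blue-link results without an AI Overview at the top.
    """
    base = "https://www.google.com/search"
    return f"{base}?{urlencode({'q': query, 'udm': 14})}"


# Example: a Web-tab search for climate change information.
print(web_only_search_url("climate change facts"))
```

Bookmarking a URL of this form (or setting it as a custom search engine in your browser) gives you a persistent way to default to the Web tab without an extension.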

Regarding customization, Google plans to introduce options allowing users to adjust the complexity of the language used in AI Overviews or to expand on the results it provides. Users will be able to enter a prompt and select between the original answer, a simplified version or the option to break it down into more detail. This should make AI Overviews more useful for a wider range of users, from novices to experts.

AI Overviews challenges and user response

The rollout of Google AI Overviews hasn't been smooth sailing. In late May, Google was prompted to revisit the feature after it threw out some questionable results. Google attributed the issues to its AI model misunderstanding queries or nuances in language, as well as gaps in available quality information — such as advice on how many rocks one should eat per day. (Note: TechRepublic strongly advises against eating rocks.)

In response, Liz Reid, head of Google Search, said the company would "keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases." The improvements include refining AI Overviews to better interpret satirical content and nonsensical queries, and adding restrictions to prevent the AI from triggering in situations where it can't provide useful information.

Subscribe to the Innovation Insider Newsletter

Catch up on the latest tech innovations that are changing the world, including IoT, 5G, the latest about phones, security, smart cities, AI, robotics, and more. Delivered Tuesdays and Fridays