
Most of Google’s AI usage comes from AI Overviews in Search, Google CEO Sundar Pichai said during a wide-ranging series of announcements at the Google I/O keynote today in Mountain View, California. This year’s I/O presentation shows Google trying to dominate the hot AI assistant landscape, from AR glasses that can answer questions about objects in view to search features that collapse the difference between a search engine and a generative AI query.
The best of Google’s offerings don’t come cheap: Gemini subscription plans are now split into Pro and Ultra tiers. The $19.99-per-month AI Pro plan brings a suite of products and higher rate limits than the free version. The pricey $249.99-per-month Ultra plan includes the highest rate limits and early access to products like the upcoming Gemini 2.5 Pro Deep Think reasoning mode, plus the full suite of AI products such as the moviemaker Flow.
AI Mode further melds search engines with generative AI
Starting today, Google Search will offer an AI Mode tab that outsources web search to Gemini, pulling from across the web to answer multi-part questions. At the I/O presentation, Pichai called AI Mode “the total reimagining of search.”
More users have been asking longer questions of Google Search since the introduction of AI Overviews, he said. That shift toward longer questions could signal a gradual cultural change in how users interface with the web, thinking not in keywords but in the kind of queries generative AI is best suited to.
AI Mode will be available for free in the US starting May 20, with a gradual rollout finishing in a matter of weeks.

“Over time, we’ll graduate many of AI Mode’s cutting-edge features and capabilities directly into the core search experience,” said Google Vice President of Search Liz Reid at Google I/O.
Essentially, Google is now saying only AI can effectively do what its search engine once did: index the entire web.
AI Mode can chain together searches and offer personalized suggestions if the user chooses to connect apps like Gmail. Google proposes AI Mode could be a one-click spot for local services, restaurant reservations, shopping, and more.
Google Search will also hook into the multimodal AI Project Astra. In AI Mode or in Google Lens, users can point their camera and speak to the AI to receive answers about the world from Search.
Google’s AI products processed 480 trillion tokens monthly this year, a 50-fold increase over last year, Pichai said. More than 7 million developers have built with Gemini across Google AI Studio and Vertex AI, and the Gemini app has more than 400 million monthly active users, he added.
Meanwhile, ChatGPT had 400 million weekly active users in January 2025, 175 million of whom use the mobile app, according to Andreessen Horowitz.
Updated version of Gemini 2.5 includes deep reasoning
A reasoning mode for Gemini 2.5 Pro, Deep Think, is on its way. Like competing reasoning models, Deep Think takes more time to produce a more considered answer than the base model. It will be available in the Gemini API to select testers only, ahead of a public release at an undisclosed date.
“We’re taking extra time to conduct more frontier safety evaluations and get input from security experts,” said Google DeepMind CEO Demis Hassabis.
SEE: 55% of business leaders who laid off employees due to generative AI now regret the decision, according to a new study.
Fresh from winning the 2024 Nobel Prize in Chemistry for the AI model AlphaFold2, Hassabis announced that updated versions of both Gemini 2.5 Flash and 2.5 Pro will be available this summer. Both show better performance and efficiency, Google said. In addition, the Gemini Live API will now support audio-visual input and native audio-out dialogue that can capture the nuances of language – including the creepy whispers heard at I/O. Both models have also been hardened against prompt injection attacks.
Users will be able to customize 2.5 Pro with thinking budgets, previously available only on 2.5 Flash, to cap the number of tokens the model can spend reasoning before it answers.
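As an illustration, here is a minimal sketch of setting a thinking budget through the google-genai Python SDK; the model name, placeholder API key, and budget value are assumptions for the example, not details Google confirmed at I/O.

```python
from google import genai
from google.genai import types

# Hypothetical example: cap how many tokens the model may spend "thinking."
client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.5-pro",  # assumed model identifier
    contents="Plan a three-stop rail itinerary from Lisbon to Vienna.",
    config=types.GenerateContentConfig(
        # thinking_budget limits internal reasoning tokens; a smaller
        # budget trades answer depth for speed and cost.
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)
print(response.text)
```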
For coding, the AI assistant Jules is coming out of Google Labs and into public beta today. An asynchronous coding agent with GitHub integration, Jules can work on real code bases – a good way to factor in a developer’s entire project, as long as the AI doesn’t introduce any flaws.
Plus, Google introduced a frontier model called Gemini Diffusion, which generates content by iteratively refining random noise. The goal is for diffusion, which has been successfully deployed in video and audio generation, to generate text more efficiently. Ideally, diffusion will eventually reduce latency across Gemini models. Interested users can sign up for a waitlist for a demo.
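Google has not published Gemini Diffusion’s formulation, but the standard denoising-diffusion setup gives a sense of the idea: corrupt data with noise step by step, then train a model to reverse the process, refining an entire output in parallel rather than emitting one token at a time.

```latex
% Forward process: noise the clean sample x_0 over T steps
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\right)

% Learned reverse process: denoise x_t back toward x_0,
% updating every position of the sequence at once
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right)
```

That parallel refinement is what makes lower latency plausible: cost scales with the number of denoising steps rather than the length of the output.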
Gemini Live improvements, Imagen 4, Veo 3, and more come to the Gemini app
Google is still chasing the science-fiction version of an AI assistant that can seamlessly integrate into any aspect of life. The company’s plan is to make the Gemini app more “personal, proactive, and powerful,” said Google Labs VP Josh Woodward. As such, the company is launching a variety of new features for the Gemini app; some of these features will fall under the Project Mariner banner, available with the Gemini Ultra subscription. Google will let Gemini interact with web pages, for example, to make bookings.
Google plans to offer opt-in toggles for connecting Gemini to Gmail and other apps. That way, the assistant can tailor its responses to the user’s interests or even use their vocabulary in generated emails.
Other announcements related to AI assistants from I/O were:
- Gemini Live screen sharing is now free in the Gemini app on Android and iOS.
- Deep Research will let you upload your own files. Soon, you’ll be able to pull information from Google Drive and Gmail.
- Canvas can transform reports into web pages, infographics, or quizzes.
- The Imagen 4 image creator is available in the Gemini app starting today, with an improved ability to render text in images.
- Veo 3, the state-of-the-art video model, is available today. It includes native audio generation: sound effects, background sound, and dialogue.
- The Gemini Live camera and screen sharing feature rolls out on Android today.
At Google I/O, the team demonstrated live translation in Google Meet. The AI voice spoke as a translator between English and Spanish with only a slight delay from the original words. AI translation is available in Google Meet starting today for subscribers, in English and Spanish. More languages will arrive in the coming weeks, and Enterprise accounts will get the feature later this year.
Google also showed off Beam, which uses an AI video model to transform 2D video into 3D “light field” displays for more realistic images. The first Beam devices, built in collaboration with HP, will be available later this year.
Gemini Code Assist benefits from Gemini 2.5 upgrades
In the world of coding, Gemini Code Assist for Individuals, formerly in public preview, is now generally available. The code review agent Gemini Code Assist for GitHub has likewise moved into general availability. Both the free and subscription versions run on Gemini 2.5.
SEE: Google told some workers to return to the office three days a week or risk their jobs.
Flow creates AI-generated video with consistent, editable scenes
Google announced several video tools for professional filmmakers. Flow, a new tool for creatives, combines the capabilities of Veo, Imagen, and Gemini. Prompts can create fully animated scenes with sound. Filmmakers can insert individual items and other elements into scenes, lengthen or shorten shots, and perform other editing tasks entirely through the prompt window. The demonstration at Google I/O featured a giant chicken carrying a car into the sky, leaning into the dreamlike atmosphere of much AI-generated video.
Flow is available with the Google AI Pro and Google AI Ultra plans in the US.
Google shows off Android XR glasses and partnerships
Lastly, Google demonstrated AI prompts on Android XR glasses and headsets. In a demonstration, the company showed how smart glasses with Gemini could schedule a meet-up at a coffee shop and provide directions, answer questions about pictures hanging on the walls, and remember objects seen earlier in the day. A live demonstration of translating between three languages – with English as an intermediary language both speakers understood – worked with a slight delay for one round of back-and-forth talk but stalled as the conversation continued.

The Android XR system will also be used on Samsung’s Project Moohan, coming later this year. Moohan is a virtual reality headset with an “infinite screen” that combines TV and smartphone functionality.