Want proof that Google has really gone all-in on AI? Look no further than today's Google I/O 2025 keynote.
Forget Android, Pixel devices, Google Photos, Maps and all the other Google staples – none of them got a look-in. Instead, the full two-hour keynote was spent taking us through Gemini, Veo, Flow, Beam, Astra, Imagen and a host of other tools to help you navigate the new AI landscape.
There was a lot to take in, but don't worry – we're here to give you the essential round-up of everything that was announced at Google's big party. Read on for the highlights.
1. Google Search got its biggest AI upgrade yet
'Googling' is no longer the default in the ChatGPT era, so Google has responded. It has rolled out its AI Mode in Search (formerly just an experiment) to everyone in the US, and that's just the start of its plans.
Within the new AI Mode tab, Google has built several new Labs tools that it hopes will stop us from jumping ship to ChatGPT and others.
A 'Deep Search' mode lets you set it to work on longer research projects, while a new ticket-buying assistant (powered by Project Mariner) will help you score access to your favorite events.
Unfortunately, the less popular AI Overviews are also getting a wider roll-out, but one thing is for sure: Google Search is going to look and feel very different from now on.
2. AI is coming for your online shopping, too
Shopping online can go from easy to chaotic in moments, given the huge number of brands, retailers and sellers out there – but Google aims to use AI to streamline the process.
That's because the aforementioned AI Mode for Search now responds to shopping-based prompts, such as 'I'm looking for a cute purse', serving up products and images for inspiration and letting users narrow down long product lists. That's if you live in the US, where the feature is rolling out first.
The headline feature of the AI-powered shopping experience is a try-on mode that lets you upload a single image of yourself, from which Google's combination of its Shopping Graph and Gemini AI models lets you virtually try on clothes.
The only caveat here is that the try-on feature is still in the experimental phase, and you'll have to sign up for the 'Search Labs' program to give it a try.
Once you have a product or piece of clothing in mind, Google's agentic checkout feature will essentially buy it on your behalf, using payment and delivery information stored in Google Pay – that is, if the price meets your approval. You can set the AI to track the cost of a particular product and only make the purchase if the price is right. Nice.
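To make the idea concrete, the core of that "only buy if the price is right" behavior boils down to a watch-then-trigger loop. Here's a minimal sketch in Python – the price feed and the `buy` callback are hypothetical stand-ins, not Google's actual API:

```python
# A toy sketch of agentic-checkout price tracking: scan observed prices
# and trigger the purchase only once the price drops to the target.
# The price list and buy() callback are hypothetical stand-ins.
from typing import Callable

def watch_and_buy(prices: list[float], target: float,
                  buy: Callable[[float], None]) -> bool:
    """Buy at the first observed price at or below the target."""
    for price in prices:
        if price <= target:
            buy(price)
            return True
    return False  # target never reached; keep watching

# Example: the agent holds off until the purse dips to the $80 target.
purchases = []
bought = watch_and_buy([99.0, 95.0, 84.0, 79.0], target=80.0,
                       buy=purchases.append)
print(bought, purchases)  # True [79.0]
```

In a real system the "price feed" would be live product data and `buy` would hand off to a checkout flow, but the decision logic is this simple.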
3. Beam could reinvent video calls
Video calls are a fixture of many people's lives, especially if you work in an office and spend 60% of your time in them. But Google's new Beam could make them much more interesting.
The idea here is to present calls in 3D, as if you're in the same room as the person you're talking to – a little like VR. However, there's no need for a VR headset or glasses here, with Beam instead using cameras, microphones and – of course – AI to work its magic.
If it all sounds pretty familiar, that's because Google has teased this before, under the name Project Starline. But this is no longer a far-off concept – it's here, and almost ready for people to use.
The catch is that both callers have to sit in a custom-built booth that can generate the 3D renderings needed. The whole thing is pretty impressive regardless, though, and the first business customers will be able to get the kit from HP later in 2025.
4. Veo 3 just changed the game for AI video
AI video generation tools are already incredibly impressive, given that they didn't even exist a year or two ago, but Google's new Veo 3 model looks like it's taking things to the next level.
As with Sora and Pika, the third-generation version of the tool can create video clips and then tie them together to make longer films. But unlike those other tools, it can also generate audio at the same time – and expertly synchronize sound and vision together.
That capability isn't limited to sound effects and background noise, either, because it can even handle dialogue – as Google demonstrated with a clip during the I/O 2025 keynote.
"We're emerging from the silent era of video generation," said Google DeepMind CEO Demis Hassabis – and we won't argue with that.
5. Gemini Live is here – and it’s free
Google Gemini Live, the search giant's AI-powered voice assistant, is now available for free on both Android and iOS. Previously a paid option, this move opens the AI up to a wealth of new users.
With Gemini Live you can talk to the generative AI assistant using natural language, as well as point your phone's camera at something to have it pull up related information. Plus, the screen- and camera-sharing features, previously limited to Android, are now coming to compatible iPhones.
Google is starting to roll out Gemini Live for free from today, with iOS users getting access to the AI and its screen-sharing features in the coming weeks.
6. Flow is a new AI tool for filmmakers
Here's one for all the budding filmmakers out there: at I/O 2025, Google took the covers off Flow, an AI-powered tool for filmmakers that can create scenes, characters and other movie assets from a natural-language text prompt.
Let's say you want to see doctors performing an operation in the back of a 1970s taxi; well, pop that into Flow and it'll generate the scene for you, using the Veo 3 model, with surprising realism.
Effectively an extension of the experimental Google Labs VideoFX tool launched last year, Flow will be available to subscribers on the Google AI Pro and Google AI Ultra plans in the US, with more countries to follow.
It could be a tool that lets budding directors and cinematic video creators test scenes and storytelling without having to shoot a single clip.
Whether it will improve film pre-production planning, or usher in a whole new era of cinema in which most scenes are created using generative AI rather than sets and traditional CGI, remains to be seen. But it seems Flow could open up movie-making to more than just eager amateurs and Hollywood directors.
7. Gemini’s artistic abilities are now even more impressive
Gemini is already a pretty good choice for AI image generation; depending on who you ask, it's either a little better or a little worse than ChatGPT, but essentially in the same ballpark.
Well, now it may have moved ahead of its rival, thanks to a big upgrade to its Imagen image model.
For starters, Imagen 4 brings a resolution boost to 2K – meaning you'll be better able to zoom in on and crop its images, or even print them.
What's more, it will also have "remarkable clarity in fine details like intricate fabrics, water droplets and animal fur, and excels in both photorealistic and abstract styles," says Google – and judging by its sample images, the results look beautiful.
Finally, Imagen 4 gives Gemini improved spelling and typography skills, which, bizarrely, have been among the hardest puzzles for AI image generators to crack so far. It's available from today, so expect even more AI-generated memes in the near future.
8. Gemini 2.5 Pro just got a 'groundbreaking' new Deep Think upgrade
Improved image capabilities aren't the only upgrade coming to Gemini – it's also getting a dose of extra brainpower with the addition of a new Deep Think mode.
This essentially supercharges Gemini 2.5 Pro with a feature that makes it think harder about the queries put to it, rather than trying to kick out an answer as quickly as possible.
In practice, that means the latest Pro version of Gemini runs several possible lines of reasoning in parallel before deciding how to respond to a query. You could think of it as the AI digging deeper into an encyclopedia, rather than winging it when it comes up with information.
There's a catch here, as Google is only rolling out Deep Think to trusted testers for now – but we wouldn't be surprised if it got a much wider release soon.
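Google hasn't published how Deep Think works under the hood, but the "several reasoning paths in parallel" idea resembles a well-known pattern: sample multiple candidate answers independently, then let them vote. Here's a toy sketch of that pattern – the `candidate_answer` function is a deterministic stand-in for a real model call, not anything Google has described:

```python
# A toy illustration of the "parallel reasoning paths" pattern
# (self-consistency): run several independent attempts at a question
# in parallel, then majority-vote on the answer. candidate_answer()
# is a stand-in for prompting a model with different sampling seeds.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def candidate_answer(question: str, seed: int) -> int:
    """One simulated 'reasoning path': most paths land on 7,
    but every third one slips to 8."""
    return 7 if seed % 3 else 8

def deep_answer(question: str, n_paths: int = 9) -> int:
    # Run the reasoning paths concurrently, then take the most common answer.
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda s: candidate_answer(question, s),
                                range(n_paths)))
    return Counter(answers).most_common(1)[0][0]

print(deep_answer("What is 3 + 4?"))  # 7 – six of nine paths agree
```

The intuition: a single fast attempt can go wrong, but errors from independent attempts rarely agree with each other, so the consensus answer tends to be more reliable.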
9. Gemini AI Ultra is Google’s new ‘VIP’ plan for AI Obsessives
Would you spend $3,000 a year on a Gemini subscription? Google thinks some people will, because it has rolled out a new Gemini AI Ultra plan in the US that costs a full $250 a month.
The plan isn't aimed at casual AI users, of course; Google says it offers "the highest usage limits and access to our most capable models and premium features", and that it will be a must if "you're a filmmaker, developer, creative professional, or simply demand the absolute best of Google AI with the highest level of access".
On the plus side, there's a 50% discount for the first three months, while the previously available Premium plan is also sticking around at $19.99 a month, now renamed AI Pro. If you like the sound of AI Ultra, it will soon be available in more countries.
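For the budget-conscious, the math works out like this (a quick back-of-the-envelope check, assuming the 50% discount applies to the first three monthly payments):

```python
# AI Ultra pricing: $250/month, with 50% off the first three months.
monthly = 250
full_year = 12 * monthly                        # standard annual cost
first_year = 3 * (monthly // 2) + 9 * monthly   # discounted first year
print(full_year, first_year)  # 3000 2625
```

So the headline figure is $3,000 a year, with the intro discount trimming year one to $2,625.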
10. Google just showed us the future of smart glasses
Google finally gave us the Android XR showcase it has been teasing for years.
At its core is Gemini: on glasses, Gemini can find and direct you to cafés based on your food preferences, perform live translation, and answer questions about the things you can see. On a headset, it can use Google Maps to transport you around the world.
Android XR is coming to devices from Samsung, Xreal, Warby Parker and Gentle Monster, though there's no word yet on when they'll be in our hands.
11. Project Astra also got an upgrade
Project Astra is Google's powerful mobile AI assistant, which can see and respond to the user's visual surroundings, and this year's Google I/O gave it some serious upgrades.
We saw Astra give a user real-time advice to help him fix his bike, speaking in natural language. We also saw Astra push back against misinformation, correcting a user's mistaken claims about the things around her as she walked down the street.
Project Astra is coming to both Android and iOS today, and its visual recognition feature is also coming to AI Mode in Google Search.
12. …as did Chrome
Is there anything that hasn't received an injection of Gemini's AI smarts? Google's Chrome browser was one of the few tools missing out, it seemed, but that has now changed.
Gemini is rolling out in Chrome for desktop from tomorrow, for Google AI Pro and AI Ultra subscribers in the US.
What does that mean? Apparently, you'll now be able to ask Gemini to clarify complex information you're researching, or have it summarize web pages. If that doesn't sound too exciting, Google also promised that Gemini will eventually work across multiple tabs and navigate websites "on your behalf".
It gives us slight HAL vibes ("I'm sorry, Dave, I'm afraid I can't do that"), but for now at least, Chrome seems content to keep serving us rather than the other way around.
13. …and so did Gemini Canvas
As part of Gemini 2.5, Canvas – the so-called 'creative space' inside the Gemini app – has had a boost via the new, upgraded AI models in this latest version of Gemini.
That means Canvas is more capable and intuitive, with the tool able to take data and prompts and transform them into infographics, games, quizzes, web pages and more within minutes.
But the real kicker here is that Canvas can now take complex ideas and turn them into working code at speed, without the user needing to know any particular coding language; all they have to do is describe what they want in a text prompt.
Such abilities open the door to 'vibe coding', where you can create software without knowing any programming language, and make it possible to prototype new app ideas quickly, through prompts alone.
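To give a flavor of what prompt-to-code means in practice: a request like "build me a two-question quiz that scores answers" might come back as something along these lines. This is a hypothetical illustration of the idea, not actual Canvas output:

```python
# Hypothetical example of the small, working program a plain-English
# prompt like "build me a two-question quiz that scores answers"
# could produce via a tool like Canvas.
QUIZ = [
    ("What year was Google founded?", "1998"),
    ("What does 'AI' stand for?", "artificial intelligence"),
]

def score(answers: list[str]) -> int:
    """Count how many supplied answers match the quiz key (case-insensitive)."""
    return sum(a.strip().lower() == key.lower()
               for (_, key), a in zip(QUIZ, answers))

print(score(["1998", "Artificial Intelligence"]))  # 2
```

The point isn't the quiz itself – it's that the user never touches the code; they just describe the outcome and iterate in natural language.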