I'd never owned a smartphone other than an iPhone until this year. However, as AI makes its way into every tech product on the planet, I had to try Android to understand the differences between artificial intelligence in the two ecosystems.
After using a Samsung Galaxy S25 for a few weeks, I returned to my iPhone 16 Pro Max. Not because it was better, but because the ecosystem you've built your life in is the deciding factor when choosing between flagship smartphones.
Once I was back on iOS, I found myself missing one specific AI feature more than any other, and without access to it on the iPhone, I quickly drifted back to an Android device.
The AI feature I'm talking about is Gemini Live, and while you could access it on iOS, the experience was dumbed down. That was until yesterday, at Google I/O 2025, when Google announced that all of Gemini Live's capabilities are rolling out to the iPhone, at no cost.
Here's why Gemini Live is the best AI tool I've ever used, and why bringing all of its capabilities to the iPhone means I'm ready to jump back to Apple.
What Visual Intelligence should have been
Gemini Live already existed in the Gemini app on iOS, but it lacked two crucial elements that make the Android version so much better. First, Gemini Live on iOS couldn't access your iPhone's camera, and second, it couldn't see what you were doing on your screen. I/O 2025 changed all that.
Now iPhone users can give Gemini Live access to their camera and screen, unlocking new ways to interact with AI that we haven't really seen on iOS before.
Gemini Live's camera capability is, on its own, one of the best, if not the best, AI tools I've used to date, and I'm excited that iPhone users can now experience it.
What is Gemini Live's camera function? Imagine a better version of what Apple wanted Visual Intelligence to be. You can simply show Gemini what you're looking at and ask questions without having to describe the subject.
I've found that Gemini Live's camera functionality thrives in situations such as cooking. I used it last week to make birria tacos, and not only did I get advice at every step of the way, but it was also able to see everything I did and help guide me to a delicious dinner.
Not only did propping my S25 on a tripod give Gemini Live the perfect angle, but because it can connect to Google apps, I could ask it to pull information about the recipe directly from the creator's video. No more constantly touching your phone with dirty hands in the kitchen, and no more double-checking a recipe. Gemini Live can do it all.
An AI-led guide every step of the way
Screen sharing lets Gemini Live see what's on your screen at any time, so you can ask questions about images, something you're working on, or even how to solve a puzzle in a game. It's seriously cool, similar to the Apple Intelligence-powered Siri we were promised but never received back at WWDC 2024.
Gemini Live's full free rollout has only just begun, so we haven't yet seen how well this functionality works on iOS. That said, if it works half as well as it does on Android, this is a feature I can see many people falling in love with.
Gemini Live and its multiple ways of interacting with the world fully unlock AI on a smartphone, and now that iPhone users can access it too, I have no reason not to return to the Apple ecosystem.