- Google Gemini Live now provides visual guidance, overlaying real-time cues on your screen when you share your camera
- The feature is designed to help users solve tasks visually and identify objects on sight
- Gemini Live has also expanded its app integrations and introduced expressive voice upgrades
Google continues its quest to get people using its Gemini AI assistant at all times and everywhere with a new set of upgrades launching with the Pixel 10 series of smartphones. The centerpiece of the new and improved Gemini Live is a set of AI eyes, a feature called visual guidance.
Basically, you can give Gemini Live access to your camera and it will look at whatever you're looking at and help you figure out things like the right tool to use or the best pieces to coordinate an outfit. The answers appear right on the screen, with arrows pointing to or circles drawn around the correct choice. For now, the feature is only available on the Pixel 10, but other Android phones and even iOS devices will get it in the near future.
Visual guidance may sound like a party trick, but it could prove to be a big move for Gemini Live. Instead of receiving a flat, spoken answer when you ask Gemini to help assemble a new piece of furniture, you can now show the parts to your camera and have the assistant visually indicate what goes where. It doesn't require special hardware; it's like showing a friend who's good at DIY what you have and asking for help.
Google clearly sees it as a way of bridging the awkwardness that sometimes arises when you ask an AI for help and it gives you vague or overly generic answers. "Use the blue-handled screwdriver" may not help much if your toolbox has three blue handles. A glowing circle over the right one is far more useful. As someone who has tried to follow a YouTube tutorial while juggling a screwdriver, I get the appeal.
Sweet talk and multitasking
Gemini Live will also sound better while it shows you things, thanks to new speech models that can adjust tone and even voice. Gemini might use a particularly calm voice to talk through a stressful topic, speed up when you're busy, or even tell you a story about pirates in a stereotypical pirate accent.
Gemini Live will also be better at multitasking thanks to new integrations with apps such as Google Calendar, Messages, and Maps. While chatting with Gemini, you can have it manage your appointments or send texts to your friends with directions.
The Gemini Live upgrades fit Google's wider approach to AI, which positions it as an ambient, always-on platform rather than a standalone feature. AI assistants flexible enough for any occasion, yet able to use context to be specifically valuable to individuals, are what Google and other AI developers have promised for a while. And while visual guidance and the other tools won't be perfect, adaptability can make up for it. You don't have to learn a new system or speak in commands. You just show Gemini what you see, ask for what you need, and get an answer tailored to the situation.