- Experts warn a single calendar input can silent hijack your smart home without your knowledge
- Scientists proved that AI can be hacked to control smart homes by using words only
- To say “Thanks” triggered Gemini to turn the lights on and boil water automatically
The promise of AI-integrated homes has long included convenience, automation and efficiency, but a new study by researchers at Tel Aviv University has postponed a more disturbing reality.
In that, there may be the first known examples of a successful AI-prompt injection attack, the team manipulated a gemini-driven smart home using nothing but a compromised Google calendar input.
The attack took advantage of Gemini’s integration with the entire Google ecosystem, especially its ability to access calendar events, interpret natural languages PROMPS and control connected smart devices.
From planning to sabotage: utilization of everyday AI -Access
Gemini, although limited in autonomy, has enough “agent capacities” to perform commands on smart home systems.
This connection became a responsibility when the researchers deployed malicious instructions into a calendar agreement, masked as a regular event.
When the user later asked Gemini to summarize their schedule, it unintentionally triggered the hidden instructions.
The embedded command included instructions for Gemini to act as a Google Home agent that is dormant until a regular phrase as “thank you” or “secure” was written by the user.
At that time, Gemini activated smart devices such as lights, shutters and even a boiler, none of which had approved at that moment.
These delayed triggers were particularly effective at bypassing existing defense and confuse the source of the actions.
This method, called “Promptware”, raises serious concerns about how AI -Bason surfaces interpret user input and external data.
The researchers claim that such rapid injection attacks represent a growing class of threats that mix social technique with automation.
They demonstrated that this technique could go far beyond controlling devices.
It can also be used to delete appointments, send spam or open malicious sites, steps that can lead directly to identity theft or malware infection.
The research team coordinated with Google to reveal the vulnerability, and in response, the company accelerated the roll -out of new protection against quick injection attacks, including extra control for calendar events and additional affirmations of sensitive actions.
Still, there are still questions about how scalable these corrections are, especially as Gemini and other AI systems get more control over personal information and devices.
Unfortunately, traditional security suites and firewall protection are not designed for this type of attack vector.
To remain in safety, users need to limit which AI tools and assistants that Gemini can access, especially calendars and smart home control.
Also, avoid storing sensitive or complex instructions in calendar events and do not let AI act on them without supervision.
Pay attention to unusual behavior from smart devices and disconnect if something seems to be off.
Via the cable



