- OpenAI’s new “apps” feature allows ChatGPT to connect to external services such as email and storage
- Radware discovered “ZombieAgent”, a prompt injection flaw that allows hidden commands to exfiltrate or propagate data
- Exploits include zero-click, one-click, persistence, and worm-like propagation; OpenAI patched on December 16th
OpenAI recently introduced a new feature to ChatGPT, which unfortunately also puts users at risk of data exfiltration and persistent access.
In December 2025, a feature called Connectors finally moved out of beta and into general availability. This feature allows ChatGPT to connect to several other apps, such as calendars, cloud storage, email accounts, and the like – giving users more context and thus better and more relevant responses.
The function is now called ‘apps’, but according to security researchers Radware, the tool also opens up a major vulnerability – rapid injection attacks.
Four methods of abuse
Radware dubbed the vulnerability ‘ZombieAgent’ and in practice it’s not that different from the vulnerabilities we’ve seen in Gemini and other GenAI tools.
Connecting ChatGPT to Gmail, for example, allows the tool to read incoming emails and provide contextual responses about conversations, scheduled calls and meetings, pending invitations, and the like.
However, an incoming email may contain a hidden malicious prompt – something written in white font on a white background or with font size 0. Invisible to the human eye, but still readable by the machine.
If the victim asks ChatGPT to read that email, the tool can execute these hidden commands without the user’s consent or interaction. The commands can be pretty much anything, from exfiltrating sensitive data to a third-party server to using the inbox to spread further.
Radware identified four ways ZombieAgent can be abused – a zero-click server-side attack (the malicious prompt is in the email and ChatGPT exfiltrates data before the user even sees the content), a one-click server-side attack (the prompt is in a file that the user must first upload), achieve persistence (a malicious command designed to be stored in the ChatGPT is used to propagate further, like a worm).
Radware said OpenAI fixed the issue on December 16, but did not explain how.
The best antivirus for all budgets
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews and opinions in your feeds. Be sure to click the Follow button!
And of course you can too follow TechRadar on TikTok for news, reviews, video unboxings, and get regular updates from us on WhatsApp also.



