- Tenable says it found seven rapid injection flaws in ChatGPT-4o, dubbed the “HackedGPT” attack chain
- Vulnerabilities include hidden commands, memory persistence, and security bypasses via trusted wrappers
- OpenAI fixed some issues in GPT-5; others remain, prompting calls for stronger defenses
ChatGPT has a number of security issues that could allow threat actors to insert hidden commands, steal sensitive data and spread misinformation into the AI tool, security researchers say.
Recently, security experts from Tenable OpenAI’s ChatGPT-4o tested and found seven vulnerabilities, which they collectively named HackedGPT. These include:
- Indirect prompt injection via trusted websites (hides commands on public websites that GPT can unknowingly follow when you read the content)
- 0-click indirect prompt injection in search context (GPT searches the web and finds a page with hidden malicious code. Asking questions may unwittingly force GPT to follow instructions)
- Request injection via 1-click (a twist on phishing where a user clicks on a link with hidden GPT commands)
- Security Mechanism Bypass (wrapping malicious links in trusted wrappers, tricking GPT into showing links to the user)
- Conversation injection: (Attackers can use the SearchGPT system to insert hidden instructions that ChatGPT later reads and effectively prompt-injects itself).
- Hides malicious content (malicious instructions may be hidden in code or markdown text)
- Persistent memory injection (malicious instructions can be placed in saved chats, causing the model to repeat the commands and continuously leak data).
Urges to harden the defense
OpenAI, the company behind ChatGPT, has fixed some of the bugs in its GPT-5 model, but not all of them, leaving millions of people potentially at risk.
Security researchers have been warning about rapid injection attacks for a while now.
Google’s Gemini is apparently susceptible to a similar problem, due to being integrated with Gmail, as users can receive emails with hidden prompts (for example, written in a white font on a white background), and if the user asks the tool for something related to that email, it can read and act on the hidden prompt.
Although in some cases the tool’s developers may set up car guards, it is mostly up to the user to be vigilant and not fall for these tricks.
“HackedGPT exposes a fundamental weakness in how large language models judge which information to trust,” said Moshe Bernstein, Senior Research Engineer at Tenable.
“Individually, these flaws seem small – but together they form a complete attack chain, from injection and evasion to data theft and persistence. It shows that AI systems are not just potential targets; they can be turned into attack tools that silently harvest information from everyday chats or browsing.”
Tenable said OpenAI remedied “some of the identified vulnerabilities,” adding that “several” remain active in ChatGPT-5, without saying which ones. As a result, the company advises AI vendors to harden defenses against rapid injection by verifying that security mechanisms are working as intended.
The best antivirus for all budgets
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews and opinions in your feeds. Be sure to click the Follow button!
And of course you can too follow TechRadar on TikTok for news, reviews, video unboxings, and get regular updates from us on WhatsApp also.



