Researchers claim that ChatGPT has a host of troubling security flaws – here’s what they found


  • Tenable says it found seven rapid injection flaws in ChatGPT-4o, dubbed the “HackedGPT” attack chain
  • Vulnerabilities include hidden commands, memory persistence, and security bypasses via trusted wrappers
  • OpenAI fixed some issues in GPT-5; others remain, prompting calls for stronger defenses

ChatGPT has a number of security issues that could allow threat actors to insert hidden commands, steal sensitive data and spread misinformation into the AI ​​tool, security researchers say.

Recently, security experts from Tenable OpenAI’s ChatGPT-4o tested and found seven vulnerabilities, which they collectively named HackedGPT. These include:

  • Indirect prompt injection via trusted websites (hides commands on public websites that GPT can unknowingly follow when you read the content)
  • 0-click indirect prompt injection in search context (GPT searches the web and finds a page with hidden malicious code. Asking questions may unwittingly force GPT to follow instructions)
  • Request injection via 1-click (a twist on phishing where a user clicks on a link with hidden GPT commands)
  • Security Mechanism Bypass (wrapping malicious links in trusted wrappers, tricking GPT into showing links to the user)
  • Conversation injection: (Attackers can use the SearchGPT system to insert hidden instructions that ChatGPT later reads and effectively prompt-injects itself).
  • Hides malicious content (malicious instructions may be hidden in code or markdown text)
  • Persistent memory injection (malicious instructions can be placed in saved chats, causing the model to repeat the commands and continuously leak data).

Urges to harden the defense

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top