Rapid injection attacks may ‘never be properly mitigated’ UK NCSC warns


  • UK’s NCSC warns rapid injection attacks can never be fully mitigated due to LLM design
  • Unlike SQL injection, LLMs lack separation between instructions and data, making them inherently vulnerable
  • Developers are encouraged to treat LLMs as “confusing proxies” and design systems that limit compromised outputs

Fast injection attacks, meaning attempts to manipulate a large language model (LLM) by embedding hidden or malicious instructions in user-supplied content, may never be properly mitigated.

This is according to the UK’s National Cyber ​​Security Center (NCSC) technical director of platform research, David C, who published the assessment in a blog assessing the technique. In the article, he argues that many compare prompt injection to SQL injection, which is inaccurate, as the former is fundamentally different and arguably more dangerous.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top