- UK’s NCSC warns rapid injection attacks can never be fully mitigated due to LLM design
- Unlike SQL injection, LLMs lack separation between instructions and data, making them inherently vulnerable
- Developers are encouraged to treat LLMs as “confusing proxies” and design systems that limit compromised outputs
Fast injection attacks, meaning attempts to manipulate a large language model (LLM) by embedding hidden or malicious instructions in user-supplied content, may never be properly mitigated.
This is according to the UK’s National Cyber Security Center (NCSC) technical director of platform research, David C, who published the assessment in a blog assessing the technique. In the article, he argues that many compare prompt injection to SQL injection, which is inaccurate, as the former is fundamentally different and arguably more dangerous.
The main difference between the two is the fact that LLMs do not enforce any real separation between instructions and data.
Inherently confusing proxies
“While initially reported as command execution, the underlying problem has turned out to be more fundamental than classic client/server vulnerabilities,” he writes. “Current large language models (LLMs) simply do not enforce a safety boundary between instructions and data in a prompt.”
Rapid injection attacks are regularly reported in systems using generative AI (genAI) and are OWASP’s #1 attack to consider when ‘developing and securing generative AI and large language model applications’.
In classic vulnerabilities, data and instructions are handled differently, but LLMs operate solely on next-token prediction, meaning they inherently cannot distinguish user-supplied data from operational instructions. “There is a good chance that a rapid injection will never be properly damped in the same way,” he added.
The NCSC official also claims that the industry is repeating the same mistakes it made in the early 2000s, when SQL injection was poorly understood and thus widely exploited.
But SQL injection eventually became better understood and new security measures became standard. For LLMs, developers should treat them as “inherently confusing proxies,” and thus design systems that limit the consequences of compromised outputs.
If an application cannot tolerate residual risk, he warns, it may simply not be an appropriate use case for an LLM.
The best antivirus for all budgets
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews and opinions in your feeds. Be sure to click the Follow button!
And of course you can too follow TechRadar on TikTok for news, reviews, video unboxings, and get regular updates from us on WhatsApp also.



