Chinese AI assistant DeepSeek-R1 struggles with sensitive topics, producing broken and insecure code for enterprise developers


  • CrowdStrike researchers find that DeepSeek-R1 produces dangerously insecure code when political expressions are included in prompts
  • Around half of the politically sensitive prompts trigger DeepSeek-R1 to refuse to generate any code
  • Hard-coded secrets and unsafe input handling often surface during politically charged prompts
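To illustrate the flaw classes the report names, here is a hypothetical sketch, not actual DeepSeek-R1 output, contrasting a hard-coded secret and string-interpolated SQL (unsafe input handling) with the parameterized alternative:

```python
import sqlite3

# Hard-coded secret: anyone with read access to the source (or the
# repository history) can recover this credential.
API_KEY = "sk-live-123456"  # illustrative placeholder value

def find_user_unsafe(conn, username):
    # Unsafe input handling: user input is interpolated directly into
    # the SQL string, so a payload like "x' OR '1'='1" alters the query.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value, so the same
    # payload is treated as a literal string and matches nothing.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

The safe variant moves secrets to configuration (e.g. environment variables) and lets the database driver handle escaping, which is the standard mitigation for both issues.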

When it was released in January 2025, DeepSeek-R1, a Chinese large language model (LLM), created a frenzy and has since been widely adopted as a coding assistant.

However, independent testing by CrowdStrike suggests that the model’s output can vary significantly depending on seemingly irrelevant contextual modifiers.
