Researchers are pushing artificial intelligence into malware territory, and the results reveal just how unreliable these supposedly dangerous systems are


  • Report finds that LLM-generated malware still fails during basic tests in real-world environments
  • GPT-3.5 produced malicious scripts instantly, revealing major security inconsistencies
  • Improved guardrails in GPT-5 changed output to safer non-malicious alternatives

Despite growing fears surrounding weaponized LLMs, new experiments have revealed that their potential for malicious output is far from reliable.

Researchers from Netskope tested whether modern language models could support the next wave of autonomous cyberattacks, aiming to determine whether these systems can generate working malicious code without relying on hard-coded logic.
