Malicious LLMs allow even unskilled hackers to create dangerous new malware


  • Hackers use untethered LLMs such as WormGPT 4 and KawaiiGPT for cybercrime
  • WormGPT 4 produces encryption routines, exfiltration tools and ransom notes; KawaiiGPT generates phishing scripts
  • Both models have hundreds of Telegram subscribers, lowering entry barriers for cybercriminals

Most generative AI tools in use today come with guardrails: they will refuse, for example, to teach people how to make bombs or to commit suicide, and they will not knowingly facilitate cybercrime.

While some hackers try to "jailbreak" these tools by working around their guardrails with clever prompts, others simply build their own, completely untethered large language models (LLMs) designed exclusively for cybercrime.
