Not even bedtime stories are safe – researchers weaponize storytelling to jailbreak AI chatbots and create malware


  • Security researchers have developed a new technique for jailbreaking AI chatbots
  • The technique required no prior knowledge of malware coding
  • It involved creating a fictional scenario to convince the model to develop an attack

Despite having no previous experience of malware coding, Cato CTRL threat intelligence researchers have warned that they were able to jailbreak several LLMs, including ChatGPT-4o, DeepSeek-R1, DeepSeek-V3, and Microsoft Copilot, using a surprisingly simple technique.

The team developed ‘Immersive World’, which uses ‘narrative engineering to bypass LLM security controls’ by creating a “detailed fictional world” to normalize restricted operations and produce a “fully functional” Chrome infostealer. Chrome is the most popular browser in the world, with over 3 billion users, which underlines the scale of the risk this attack poses.
