Cato Networks Unveils New LLM Jailbreak Technique for Malware Creation

Cato Networks has unveiled a new LLM jailbreak technique called 'Immersive World' that enables generative AI tools to create password-stealing malware, as detailed in its 2025 Cato CTRL Threat Report and announced in a press release. The report demonstrates how a Cato CTRL threat intelligence researcher with no prior malware coding experience tricked AI tools including ChatGPT, Microsoft Copilot, and DeepSeek into developing malware capable of stealing login credentials from Google Chrome.

The technique involves creating a fictional world in which each AI tool is assigned a specific role and set of challenges, effectively bypassing its security controls. By lowering the barrier to creating malware, the method highlights the risks that generative AI tools pose. Cato Networks emphasizes the need for improved AI security strategies to prevent such misuse.

The report underscores the growing democratization of cybercrime, posing significant risks to organizations. It calls for proactive measures to enhance AI security and prevent the misuse of generative AI technologies.
