Google Threat Intelligence Group Reports Surge in AI Misuse for Cyber Operations

February 16, 2026
The Google Threat Intelligence Group (GTIG) has released a report detailing how threat actors are increasingly using AI for phishing, reconnaissance, and malware development, while also conducting model extraction attacks targeting proprietary AI systems.
The Google Threat Intelligence Group has published new findings showing a global rise in the misuse of artificial intelligence by both state-sponsored and private-sector threat actors. The report identifies widespread use of AI tools to enhance phishing, reconnaissance, and malware creation activities across multiple regions.

According to the report, government-backed actors from North Korea, Iran, China, and Russia are using large language models to generate realistic phishing content, conduct target research, and support coding tasks. These actors have leveraged AI systems, including Gemini, to automate reconnaissance and improve the credibility of their social engineering campaigns. Google has taken action to disable accounts and assets linked to these malicious operations.

The report also highlights a growing number of model extraction or “distillation” attacks, in which adversaries attempt to replicate the capabilities of proprietary AI models through repeated API queries. These activities were primarily attributed to private-sector entities and researchers seeking to replicate model behavior in their own systems. Google stated that it has detected and mitigated these attacks to protect its AI systems.
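To illustrate the mechanics the report describes, the sketch below shows distillation in miniature: an attacker harvests input/output pairs from a black-box model by querying it repeatedly, then fits a local “student” model to mimic the responses. Everything here is illustrative; `teacher_api` is a hypothetical stand-in for a proprietary model endpoint, not any real service, and a toy linear function keeps the fitting step to ordinary least squares.

```python
import random

# Hypothetical stand-in for a proprietary model's API endpoint.
# A real extraction attack would issue these queries to a remote service.
def teacher_api(x: float) -> float:
    return 3.0 * x + 1.0  # the "proprietary" behavior the attacker cannot see

# Step 1: harvest input/output pairs through repeated queries.
random.seed(0)
queries = [random.uniform(-10, 10) for _ in range(200)]
responses = [teacher_api(x) for x in queries]

# Step 2: fit a "student" model that mimics the harvested responses
# (ordinary least squares for this 1-D linear toy case).
n = len(queries)
mean_x = sum(queries) / n
mean_y = sum(responses) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(queries, responses)) \
        / sum((x - mean_x) ** 2 for x in queries)
intercept = mean_y - slope * mean_x

def student(x: float) -> float:
    """Local clone that no longer needs the original API."""
    return slope * x + intercept
```

The same pattern scales up to neural models, where the student is trained on prompt/completion pairs instead of numeric samples; the defining feature is that the attack consumes only ordinary API access.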

In addition, the report outlines experimental uses of AI in malware development. Examples include the HONESTCUE malware family, which used Gemini’s API to generate code for secondary payloads, and the COINBAIT phishing kit, likely built using AI code generation tools. These cases illustrate how attackers are integrating AI into traditional cyber operations to increase speed and efficiency.

Google noted that while no direct attacks by advanced persistent threat groups on frontier AI models have been observed, the company continues to strengthen its security measures to prevent misuse. GTIG emphasized that organizations operating AI services should monitor for extraction patterns and apply safeguards to protect proprietary systems.
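The kind of monitoring GTIG recommends can be approximated with simple telemetry heuristics. The sketch below flags API clients whose query volume or prompt diversity looks like systematic probing rather than normal use. The thresholds, field names, and `flag_extraction_suspects` function are all illustrative assumptions, not a description of Google's actual safeguards.

```python
from collections import Counter

# Illustrative thresholds -- a real service would tune these empirically.
MAX_QUERIES_PER_WINDOW = 1000   # raw volume cap per client per window
MIN_QUERIES_FOR_RATIO = 100     # ignore diversity signal for small samples
MAX_UNIQUE_PROMPT_RATIO = 0.9   # near-100% unique prompts suggests probing

def flag_extraction_suspects(query_log):
    """query_log: list of (client_id, prompt) pairs from one time window.

    Returns the set of client IDs whose traffic matches either heuristic:
    excessive volume, or a large sample of almost entirely unique prompts.
    """
    totals = Counter(cid for cid, _ in query_log)
    prompts_by_client = {}
    for cid, prompt in query_log:
        prompts_by_client.setdefault(cid, set()).add(prompt)

    suspects = set()
    for cid, total in totals.items():
        unique_ratio = len(prompts_by_client[cid]) / total
        if total > MAX_QUERIES_PER_WINDOW or (
            total >= MIN_QUERIES_FOR_RATIO
            and unique_ratio > MAX_UNIQUE_PROMPT_RATIO
        ):
            suspects.add(cid)
    return suspects
```

In practice these heuristics would be one layer among several (rate limiting, query watermarking, behavioral anomaly models), since a patient attacker can spread queries across accounts and time.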
