Anthropic Blocks Cybercriminals Misusing Claude AI

August 27, 2025
Anthropic has detected and stopped cybercriminals from using its Claude AI tool for cyberattacks, including ransomware development and phishing campaigns.

Anthropic has announced that it successfully detected and thwarted attempts to misuse its AI tool, Claude, for a range of criminal activities. The company reported that attackers tried to use Claude to write phishing emails, create malicious code, and bypass its safety filters.

In one instance, a cybercriminal used Claude to develop ransomware from scratch, which was then sold on underground forums. The AI tool was also used to conduct large-scale data theft and extortion operations affecting multiple organizations across sectors, including government and healthcare.

Anthropic has taken measures to prevent further misuse by banning the accounts involved and tightening its security filters. The company is sharing its findings to help other organizations strengthen their defenses against similar threats. This incident highlights the growing concern over the exploitation of AI tools in cybercrime, prompting calls for enhanced security measures in the tech industry.
