Adversa AI Releases 2025 AI Security Incidents Report

August 02, 2025
Adversa AI has unveiled its 2025 AI Security Incidents Report, highlighting significant vulnerabilities in generative and agentic AI systems, as announced in a press release.

The report reveals that AI systems, from chatbots to autonomous agents, are increasingly being exploited, with incidents doubling since 2024.

Key findings indicate that 35% of AI security incidents were caused by prompt injections, leading to substantial financial losses. While generative AI was involved in 70% of incidents, agentic AI caused the most severe failures, including crypto thefts and API abuses. The report also notes that breaches often stem from improper validation and infrastructure gaps.

The report includes 17 real-world case studies and provides actionable guidance for improving AI security. Adversa AI, a leader in AI Red Teaming and Agentic AI Security, continues to protect organizations by identifying vulnerabilities before they reach production. For more information, visit Adversa AI.
