Akeyless Study Finds Two-Thirds of Enterprises Suspect AI Agents Accessed Unauthorized Data

May 12, 2026
Akeyless reports that 67% of organizations using AI agents believe those agents have accessed data beyond their intended scope. The study highlights widespread use of static credentials and slow detection times, with enterprises spending over $1 million on average to manage related security issues.

Akeyless announced in a press release that two-thirds of enterprises using AI agents suspect those agents have already accessed data outside their intended scope. The findings come from the 2026 State of AI Agent Identity Security report, based on a survey of 400 IT and security leaders in the United States and United Kingdom.

According to the study, 67% of respondents believe AI agents have accessed unauthorized data, while 61% have revoked or rotated credentials due to suspected exposure. On average, organizations take 14 hours to detect a compromised AI agent and nearly a week to contain and remediate the issue. Only 7% of participants expressed confidence that their existing controls could prevent a compromised agent from operating.

Organizations reported spending more than one million dollars on average over the past year managing AI agent identity and security problems. The research also found heavy reliance on persistent credentials such as API keys and static secrets, often embedded in code or workflows. More than 80% of organizations said a single compromised credential could affect multiple systems.

Akeyless noted that most identity management systems are designed for human users, not autonomous systems that act continuously. The company’s platform provides runtime identity security that issues ephemeral credentials, enforces context-aware access, and enables full auditability of AI agent activity.
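To make the contrast with static credentials concrete, here is a minimal, hypothetical sketch of the ephemeral-credential pattern described above: a short-lived token scoped to one agent and one resource, with a context-aware check at use time. All names (`issue_credential`, `is_authorized`, the agent and resource identifiers) are illustrative assumptions, not Akeyless's actual API.

```python
# Hypothetical sketch of ephemeral, context-aware credentials for an AI agent.
# This is NOT Akeyless's API; it only illustrates the general pattern.
import secrets
import time
from dataclasses import dataclass


@dataclass
class EphemeralCredential:
    token: str        # random, single-use secret
    agent_id: str     # the one agent allowed to present it
    scope: str        # the one resource it grants access to
    expires_at: float # Unix timestamp after which it is dead


def issue_credential(agent_id: str, scope: str,
                     ttl_seconds: float = 300.0) -> EphemeralCredential:
    """Mint a short-lived credential bound to one agent and one resource."""
    return EphemeralCredential(
        token=secrets.token_urlsafe(32),
        agent_id=agent_id,
        scope=scope,
        expires_at=time.time() + ttl_seconds,
    )


def is_authorized(cred: EphemeralCredential,
                  agent_id: str, resource: str) -> bool:
    """Context-aware check: right agent, right scope, and not expired."""
    return (
        cred.agent_id == agent_id
        and cred.scope == resource
        and time.time() < cred.expires_at
    )


cred = issue_credential("agent-42", "billing-db", ttl_seconds=60.0)
print(is_authorized(cred, "agent-42", "billing-db"))  # in scope, fresh
print(is_authorized(cred, "agent-42", "hr-db"))       # out of scope
```

Because the token expires on its own and is bound to a single scope, a leaked credential cannot be replayed indefinitely across systems, which is the failure mode the study attributes to embedded API keys and static secrets.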
