Cyberhaven Report Highlights Security Risks in Corporate AI Tools

Cyberhaven's latest report reveals that 71.7% of AI tools used in workplaces pose high or critical data security risks, with 83.8% of enterprise data flowing to these unsecured platforms.

Cyberhaven has released its "2025 AI Adoption and Risk Report," detailing significant data security risks associated with corporate AI tools. According to the company's announcement, 71.7% of AI tools used in workplaces are classified as high or critical risk, and 83.8% of enterprise data is being directed to these unsecured platforms.

The analysis, which covers the AI usage patterns of 7 million workers, highlights the rapid growth of AI adoption in the workplace. Over the past 24 months, AI usage frequency has increased 61-fold, with the highest adoption rates among tech company employees. This surge has also increased the exposure of sensitive corporate data: 34.8% of the data entered into AI tools is classified as sensitive.

The report underscores the need for organizations to balance the transformative potential of AI with robust data security measures to protect valuable information assets.

