Cyberhaven Report Highlights Security Risks in Corporate AI Tools

Cyberhaven's latest report reveals that 71.7% of AI tools used in workplaces pose high or critical data security risks, with 83.8% of enterprise data flowing to these unsecured platforms.

Cyberhaven has released its "2025 AI Adoption and Risk Report," revealing significant data security risks associated with corporate AI tools. According to the company's press release, 71.7% of AI tools used in workplaces are classified as high or critical risk, and 83.8% of enterprise data is being directed to these unsecured platforms.

The analysis, which covers the AI usage patterns of 7 million workers, highlights the rapid growth of AI adoption in the workplace. AI usage has increased 61-fold over the past 24 months, with the highest adoption rates among tech company employees. This surge, however, has also increased the exposure of sensitive corporate data: 34.8% of the data entered into AI tools is classified as sensitive.

The report underscores the need for organizations to balance the transformative potential of AI with robust data security measures to protect valuable information assets.

