Orca Security Report Finds AI Credential Leaks in 42% of Organizations
In a press release, Orca Security announced its 2026 State of Application Security Report, revealing that 41.88% of production organizations have leaked AI or machine learning credentials. The analysis found Hugging Face tokens exposed in 28.49% of organizations, OpenAI credentials in 18.39%, Databricks credentials in 11.92%, and Anthropic credentials in 10.10%.
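Leaks like these are typically caught by scanning repositories and configuration for provider-specific token formats (Hugging Face access tokens begin with `hf_`, OpenAI API keys with `sk-`). The report does not describe Orca's detection method; the following is only a minimal illustrative sketch of prefix-based secret scanning:

```python
import re

# Illustrative patterns only, based on the publicly known token prefixes
# ("hf_" for Hugging Face, "sk-" for OpenAI). This is not Orca's method.
TOKEN_PATTERNS = {
    "huggingface": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    "openai": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
}

def find_leaked_tokens(text: str) -> list[tuple[str, str]]:
    """Return (provider, token) pairs for any token-like strings found."""
    hits = []
    for provider, pattern in TOKEN_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((provider, match))
    return hits
```

Real scanners add entropy checks and validation against the provider's API to cut false positives, but even a simple pass like this over commits and CI logs surfaces most accidental exposures.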
The report highlights persistent vulnerabilities across software supply chains and cloud environments. More than 81% of organizations deploy vulnerable dependencies, 77% leave high or critical container vulnerabilities unpatched for over 90 days, and 46.2% remain exposed to the Log4Shell flaw years after its disclosure.
Orca’s research also found that 21.68% of CI/CD pipelines maintain overly permissive token permissions and 30.6% do not require signed commits. Nearly 58% of organizations still have identity and access management users without multi-factor authentication enabled, increasing exposure to credential-based attacks.
The data, collected from 1,079 production organizations across the U.S. and Europe between Q3 2025 and Q1 2026, shows that while cloud-native and AI adoption accelerates, foundational security controls have not kept pace. Orca's report calls for stronger integration of security practices throughout the software development lifecycle.