
Pangea Reveals Study on GenAI Vulnerabilities from Prompt Injection Challenge
Pangea has released findings from its global $10,000 Prompt Injection Challenge, conducted in March 2025, announced in a press release. The study involved over 800 participants from 85 countries, generating nearly 330,000 prompt injection attempts using more than 300 million tokens. This initiative aimed to uncover vulnerabilities in AI security guardrails.
The challenge revealed several key insights, including the non-deterministic nature of prompt injection attacks, which can succeed unpredictably. It also highlighted risks such as data leakage and adversarial reconnaissance, where AI applications could be exploited to reveal sensitive information. The study emphasized the necessity of multi-layered defenses, as basic system prompt guardrails were found insufficient, with approximately 1 in 10 prompt injection attempts succeeding.
Pangea's findings underscore the importance of comprehensive security strategies for AI applications. Recommendations include deploying multi-layered guardrails, reducing attack surfaces, and conducting continuous security testing. The full research report, "Defending Against Prompt Injection: Insights from 300K attacks in 30 days," is available for further details.
We hope you enjoyed this article.
Consider subscribing to one of several newsletters we publish. For example, in the Daily AI Brief you can read the most up to date AI news round-up 6 days per week.
Also, consider following us on social media:
Subscribe to Cybersecurity AI Weekly
Weekly newsletter about AI in Cybersecurity.
Market report
2025 State of Data Security Report: Quantifying AI’s Impact on Data Risk
The 2025 State of Data Security Report by Varonis analyzes the impact of AI on data security across 1,000 IT environments. It highlights critical vulnerabilities such as exposed sensitive cloud data, ghost users, and unsanctioned AI applications. The report emphasizes the need for robust data governance and security measures to mitigate AI-related risks.
Read more