
Pangea Launches AI Security Guardrails and $10,000 Jailbreak Competition
Pangea has unveiled a suite of AI security guardrails, including AI Guard and Prompt Guard, to address risks associated with large language models (LLMs) and accelerate AI development. Announced in a press release, these tools aim to protect against threats such as prompt injection and sensitive information disclosure.
AI Guard is designed to prevent data leakage and block malicious content, utilizing over a dozen detection technologies to inspect AI interactions. It can redact, block, or disarm offending content while maintaining data structure through format-preserving encryption. Prompt Guard focuses on analyzing prompts to block jailbreak attempts and organizational limit violations, using a defense-in-depth approach with heuristics and custom-trained models.
To demonstrate the complexity of AI security threats, Pangea is launching "The Great AI Escape" Virtual Escape Room Challenge. This online competition features three themed escape rooms where participants use prompt engineering techniques to bypass controls. The challenge offers $10,000 in total prize money, with registration now open.
We hope you enjoyed this article.
Consider subscribing to one of several newsletters we publish like AI Policy Brief.
Also, consider following us on social media:
Subscribe to Daily AI Brief
Daily report covering major AI developments and industry news, with both top stories and complete market updates
Market report
AI’s Time-to-Market Quagmire: Why Enterprises Struggle to Scale AI Innovation
The 2025 AI Governance Benchmark Report by ModelOp provides insights from 100 senior AI and data leaders across various industries, highlighting the challenges enterprises face in scaling AI initiatives. The report emphasizes the importance of AI governance and automation in overcoming fragmented systems and inconsistent practices, showcasing how early adoption correlates with faster deployment and stronger ROI.
Read more