Pangea Launches AI Security Guardrails and $10,000 Jailbreak Competition


Pangea has announced the availability of AI Guard and Prompt Guard to enhance AI security, alongside a $10,000 jailbreak competition to highlight AI vulnerabilities.

Pangea has unveiled two AI security guardrail products, AI Guard and Prompt Guard, intended to address risks associated with large language models (LLMs) while accelerating AI development. Announced in a press release, the tools protect against threats such as prompt injection and sensitive information disclosure.

AI Guard is designed to prevent data leakage and block malicious content, using more than a dozen detection technologies to inspect AI interactions. It can redact, block, or disarm offending content while preserving data structure through format-preserving encryption. Prompt Guard analyzes prompts to block jailbreak attempts and violations of organizational limits, taking a defense-in-depth approach that combines heuristics with custom-trained models.
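To illustrate how this kind of guardrail typically sits between user input and a model, here is a minimal Python sketch. The endpoint URL, request and response fields, and environment variable name are illustrative assumptions, not Pangea's documented API.

```python
# Minimal sketch of wiring a guardrail check in front of an LLM call.
# The endpoint URL, request/response shape, and env var below are
# illustrative assumptions, not Pangea's documented API.
import os
import requests

GUARD_URL = "https://ai-guard.example.com/v1/text/guard"  # hypothetical endpoint
API_TOKEN = os.environ["GUARD_API_TOKEN"]                 # hypothetical env var


def guarded_prompt(user_prompt: str) -> str:
    """Run the prompt through the guardrail before it reaches the model.

    Returns the (possibly redacted) prompt, or raises if the guardrail
    blocks it outright.
    """
    resp = requests.post(
        GUARD_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"text": user_prompt},
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: {"blocked": bool, "redacted_text": str}
    result = resp.json()

    if result.get("blocked"):
        raise ValueError("Prompt rejected by guardrail (possible jailbreak attempt)")
    # Format-preserving redaction would replace sensitive values with
    # same-shaped tokens, so downstream parsing of the text still works.
    return result.get("redacted_text", user_prompt)


if __name__ == "__main__":
    safe = guarded_prompt("My SSN is 123-45-6789; summarize my account history.")
    print(safe)  # forward `safe` to the LLM instead of the raw prompt
```

The key design point this sketch illustrates is inline placement: because the guardrail runs before the model is called, blocked or sensitive content never reaches the LLM at all.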

To demonstrate the complexity of AI security threats, Pangea is launching "The Great AI Escape" Virtual Escape Room Challenge. This online competition features three themed escape rooms where participants use prompt engineering techniques to bypass controls. The challenge offers $10,000 in total prize money, with registration now open.
