OpenAI Introduces Aardvark, an AI Agent for Security Research

November 03, 2025
OpenAI has launched Aardvark, a GPT-5-powered autonomous security researcher that scans, validates, and helps patch software vulnerabilities. The agent is currently in private beta for select partners.

OpenAI has introduced Aardvark, an autonomous security researcher powered by GPT-5, now available in private beta according to a company announcement. The agent is designed to help developers and security teams identify and fix vulnerabilities across large codebases.

Aardvark continuously monitors repositories to detect potential vulnerabilities, assess their exploitability, and propose targeted patches. It operates through a multi-stage process that includes repository analysis, commit scanning, validation of exploits in a sandboxed environment, and patch generation using OpenAI Codex. Each finding is accompanied by an annotated explanation and a Codex-generated fix for human review.

The system integrates with GitHub and existing development workflows, enabling continuous protection without interrupting engineering processes. In benchmark testing on internal and partner repositories, Aardvark identified 92% of known and synthetically introduced vulnerabilities. It has also been applied to open-source projects, where ten of its findings received CVE identifiers.

Aardvark has been deployed internally at OpenAI for several months, where it has strengthened the company's security posture. The private beta invites select partners to test the system's performance across diverse environments ahead of a broader release.
