
ROOST Initiative Launches to Enhance AI Safety with Open-Source Tools
The Robust Open Online Safety Tools (ROOST) initiative was launched at the AI Action Summit in Paris, aiming to enhance AI safety through open-source tools. Developed at Columbia University's Institute of Global Politics, ROOST seeks to build scalable and interoperable safety infrastructure for AI technologies. The initiative is backed by major technology firms and philanthropic organizations, including OpenAI, Google, Discord, and Roblox.
ROOST focuses on providing free, open-source tools to detect, review, and report child sexual abuse material (CSAM). It also plans to leverage large language models (LLMs) to power its safety infrastructure. The initiative has secured $27 million in funding for its first four years, with contributions from various philanthropies and tech companies.
The initiative emphasizes open-source development as a means to foster trust and collaboration in AI safety. By making safety tools widely accessible, ROOST aims to support AI development while maintaining necessary safeguards. This approach contrasts with regulation-heavy models like the European Union's AI Act, offering a more flexible governance framework.
ROOST's launch reflects a growing industry effort to prioritize AI safety through open collaboration, providing organizations with the tools needed to integrate robust safety measures while continuing to innovate.