Knostic Secures $11 Million to Enhance AI Data Security

Knostic has raised $11 million to improve security for enterprise AI tools, focusing on need-to-know access controls for large language models.

Knostic has secured $11 million in funding to enhance its AI security offerings, the company announced in a press release. Knostic specializes in need-to-know access controls for generative AI, aiming to prevent data leaks from enterprise large language models (LLMs).

The funding round was led by Bright Pixel Capital, with participation from Silicon Valley CISO Investments (SVCI), DNX Ventures, Seedcamp, and notable angel investors. This investment brings Knostic's total funding to $14 million.

Knostic's technology provides a customizable safety layer for AI tools like Microsoft 365 Copilot and Glean, allowing enterprises to adopt AI without exposing sensitive information. The company has gained recognition at major industry events, winning awards at both the RSA Conference and the Black Hat Startup Spotlight Competition in 2024.

Co-founders Gadi Evron and Sounil Yu emphasize that the technology is key to enabling secure AI adoption. "Need-to-know boundaries allow enterprises to accelerate their AI adoption without compromising security," said Yu. Knostic's solutions are designed to address LLM oversharing, in which AI tools surface information beyond a user's need to know, a significant concern for businesses deploying enterprise AI.
