Legit Security Enhances AI Security Command Center

September 29, 2025
Legit Security has released a major update to its AI Security Command Center, announced in a press release. The update aims to give organizations comprehensive visibility into AI-generated code, AI models, and the risks they introduce across the software development lifecycle (SDLC).

The AI Security Command Center offers a comprehensive view of AI usage, highlighting when, where, and how AI-generated code and models are utilized. It also identifies potential risks, such as the use of unapproved or low-reputation AI models, which may lack security guardrails.
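The kind of check described above, flagging references to models that are not on an organization's approved list, can be sketched as follows. This is a hypothetical illustration, not Legit Security's actual implementation; the model names and function are invented for the example.

```python
# Hypothetical sketch: flag AI models referenced in a codebase that are
# not on an organization's approved list. Names are illustrative only.

APPROVED_MODELS = {"gpt-4o", "claude-3-5-sonnet", "codellama-70b"}

def flag_unapproved(models_in_use):
    """Return the referenced models missing from the approved list, sorted."""
    return sorted(set(models_in_use) - APPROVED_MODELS)

findings = flag_unapproved(["gpt-4o", "mystery-llm-v2", "codellama-70b"])
print(findings)  # ['mystery-llm-v2']
```

A real policy engine would also weigh model reputation and security guardrails, but the core pattern is a comparison against a sanctioned allowlist.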

Key features of the updated platform include real-time visibility into AI-related risks and metrics at both team and application levels. This allows security teams to monitor AI usage and identify areas that require remediation or additional training. The platform's AI heat map helps pinpoint teams that introduce the most security issues, facilitating targeted support and training.
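The aggregation behind a team-level heat map can be illustrated with a minimal sketch: count security findings per team so the teams introducing the most issues surface first. The data shape and field names here are assumptions for illustration, not the platform's actual schema.

```python
from collections import Counter

# Hypothetical sketch of a team-level "AI heat map" aggregation:
# rank teams by the number of AI-related security findings they introduce.
# The findings data and field names are illustrative only.

findings = [
    {"team": "payments", "issue": "secret-in-ai-generated-code"},
    {"team": "payments", "issue": "unapproved-model"},
    {"team": "web", "issue": "unapproved-model"},
]

def heat_map(findings):
    """Return (team, finding_count) pairs, hottest team first."""
    return Counter(f["team"] for f in findings).most_common()

print(heat_map(findings))  # [('payments', 2), ('web', 1)]
```

Ranking teams this way is what lets security leads target remediation and training where they matter most.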

Yoav Stahl, Vice President of Product at Legit, emphasized the importance of this update in providing security teams with the necessary visibility and understanding of AI-related risks, as AI tools become increasingly prevalent in software development.

