Netskope Expands AI Security with New DSPM Features
Netskope has announced enhancements to its Netskope One platform, introducing new Data Security Posture Management (DSPM) capabilities to bolster AI security. These updates aim to provide comprehensive protection for AI applications by managing risks associated with sensitive data and AI model interactions.
The Netskope One platform now offers expanded visibility and control over data used in training both public and private large language models (LLMs). This includes preventing sensitive data from being inadvertently fed into LLMs and assessing AI-related risks through data classification and exposure insights. The platform's DSPM features enable organizations to automate policy enforcement, ensuring only approved data is utilized in AI processes.
Netskope's enhancements address the growing complexity of AI ecosystems, which include public generative AI applications, private AI applications, and AI agents. By providing a unified approach to AI security, Netskope One helps organizations minimize risks while maintaining productivity in AI usage. The platform's advanced discovery and classification capabilities support safe AI development by identifying and managing data interactions with LLMs and AI agents.