Nvidia Riva Vulnerabilities Expose AI Services to Security Risks

May 02, 2025
Trend Micro has identified vulnerabilities in Nvidia Riva deployments that could lead to unauthorized access and misuse of AI-powered services.
Nvidia has addressed security vulnerabilities in its Riva AI services that were discovered by Trend Micro and are tracked as CVE-2025-23242 and CVE-2025-23243. According to Trend Micro, the flaws were found in Riva deployments across multiple organizations and could allow unauthorized access to, and misuse of, AI-powered inference services such as speech recognition and text-to-speech processing.

The vulnerabilities stem primarily from misconfigured API endpoints that lacked authentication, leaving them open to exploitation. This could result in unauthorized use of GPU resources and API keys, increased risk of data leakage, and denial-of-service attacks. Organizations using these services are advised to review their configurations and ensure they are running the latest version of the Riva framework.
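As a first step in such a configuration review, teams can check whether inference ports are reachable from networks that should not have access. The sketch below is a minimal, hypothetical reachability probe using only the Python standard library; the port number is an illustrative assumption (Riva's gRPC endpoint commonly listens on 50051), not a detail taken from the advisory, and a reachable port alone does not prove the service is unauthenticated.

```python
import socket

# Illustrative default: Riva's gRPC inference endpoint commonly
# listens on 50051. Verify against your own deployment.
RIVA_GRPC_PORT = 50051

def endpoint_reachable(host: str, port: int = RIVA_GRPC_PORT,
                       timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Reachability from an untrusted network segment is a red flag
    worth investigating, though it does not by itself confirm that
    the service accepts unauthenticated requests.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False
```

Running this probe from outside the intended trust boundary (for example, from a host that should be blocked by network segmentation) gives a quick signal about whether the endpoint is exposed more broadly than intended.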

Trend Micro recommends implementing secure API gateways, network segmentation, and strong authentication mechanisms to mitigate these risks. Additionally, keeping the Riva framework and its dependencies updated is crucial to protect against known vulnerabilities and potential exploits.
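One piece of the recommended hardening, requiring credentials before a request reaches the inference service, can be sketched as a simple gateway-side check. The example below is a hypothetical illustration, not part of Riva or Trend Micro's guidance: it validates a bearer token from request headers using a constant-time comparison. The key name and header handling are assumptions for the sketch; in practice the secret would come from a secrets manager, not source code.

```python
import hmac

# Hypothetical shared secret for illustration only; load real keys
# from a secrets manager or environment, never hard-code them.
EXPECTED_API_KEY = "example-riva-api-key"

def is_authorized(headers: dict) -> bool:
    """Accept a request only if its Authorization header carries the
    expected bearer token.

    hmac.compare_digest compares in constant time, which avoids
    leaking key material through timing differences.
    """
    auth = headers.get("authorization", "")
    if not auth.startswith("Bearer "):
        return False
    presented = auth[len("Bearer "):]
    return hmac.compare_digest(presented, EXPECTED_API_KEY)
```

A gateway would call a check like this before forwarding traffic to the inference backend, so that unauthenticated requests never consume GPU resources.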

