VUNO's AI Cardiac Risk System Gains EU and UK Certifications
VUNO announced in a press release that its AI-powered cardiac arrest risk management system, VUNO Med-DeepCARS, has received CE MDR certification in the European Union and the UKCA mark in the United Kingdom. The certifications, achieved ahead of schedule, clear the way for VUNO's expansion into European and Middle Eastern markets.
The CE MDR certification confirms the clinical safety and effectiveness of DeepCARS across the 27 EU member states, allowing VUNO to pursue partnerships with local AI healthcare providers to enhance hospital adoption and reimbursement processes. In the Middle East, where CE MDR and U.S. FDA certifications are key regulatory references, VUNO plans to complete registrations in key countries by the end of the year and begin full-scale operations by 2026.
DeepCARS is designed to monitor the risk of in-hospital cardiac arrest within 24 hours by analyzing vital signs such as blood pressure, heart rate, respiratory rate, and body temperature. It is already implemented in over 130 hospitals in South Korea, covering more than 48,000 hospital beds.
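For readers unfamiliar with this class of tools, the sketch below shows the general shape of rule-based early-warning scoring over the same four vital signs. It is a purely illustrative, hypothetical example with made-up thresholds: DeepCARS itself uses a deep-learning model, and none of the names or cut-offs below come from VUNO or reflect clinical guidance.

```python
from dataclasses import dataclass


@dataclass
class VitalSigns:
    """One observation of the four vital signs mentioned in the article."""
    systolic_bp: float       # mmHg
    heart_rate: float        # beats per minute
    respiratory_rate: float  # breaths per minute
    temperature: float       # degrees Celsius


def early_warning_score(v: VitalSigns) -> int:
    """Return a crude aggregate risk score from threshold checks on each vital sign.

    The thresholds are illustrative placeholders only; they are not the
    criteria used by DeepCARS and are not medical advice.
    """
    score = 0
    if v.systolic_bp < 90 or v.systolic_bp > 180:
        score += 2
    if v.heart_rate < 50 or v.heart_rate > 110:
        score += 2
    if v.respiratory_rate < 10 or v.respiratory_rate > 24:
        score += 2
    if v.temperature < 35.0 or v.temperature > 38.5:
        score += 1
    return score


if __name__ == "__main__":
    patient = VitalSigns(systolic_bp=85, heart_rate=118,
                         respiratory_rate=26, temperature=38.9)
    # A higher score would flag the patient for closer monitoring.
    print(f"Illustrative risk score: {early_warning_score(patient)}")
```

The point of the sketch is only to show why continuous vital-sign streams are a natural input for predicting deterioration; a deep-learning system like DeepCARS learns its risk estimate from data rather than from hand-set thresholds like these.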