Vectara Introduces Hallucination Corrector for Enterprise AI

Vectara has launched a Hallucination Corrector to improve the reliability of enterprise AI systems, reducing hallucination rates to about 0.9%.

Announced in a press release, the new Hallucination Corrector is designed to detect and mitigate unreliable responses from AI models, a common issue known as hallucinations.

Hallucinations occur when AI models provide false information with confidence. Vectara's Hallucination Corrector works alongside its Hughes Hallucination Evaluation Model (HHEM) to reduce these occurrences. The HHEM scores AI-generated responses against source documents to determine accuracy, and the Corrector provides explanations and corrections for any inaccuracies.
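To make the described workflow concrete, here is a minimal Python sketch of a detect-and-correct loop in this spirit: score a response against its source documents and, when the score falls below a threshold, hand it to a corrector for an explanation and rewrite. The naive word-overlap scorer, the 0.8 threshold, and the stubbed correction step are illustrative assumptions only; they are not Vectara's HHEM or Hallucination Corrector API.

```python
# Minimal sketch of a detect-and-correct workflow in the spirit of
# HHEM plus a hallucination corrector. The naive word-overlap scorer,
# the 0.8 threshold, and the stubbed correction are illustrative
# assumptions, not Vectara's actual models or API.

def score_factual_consistency(response: str, sources: list[str]) -> float:
    """Stand-in scorer: fraction of response words that appear in the sources."""
    source_words = set(" ".join(sources).lower().split())
    response_words = response.lower().split()
    if not response_words:
        return 1.0
    supported = sum(1 for word in response_words if word in source_words)
    return supported / len(response_words)


def correct_if_needed(response: str, sources: list[str], threshold: float = 0.8) -> dict:
    """Return the response unchanged if it scores as grounded; otherwise flag it
    and pass it to a corrector (stubbed here) for an explanation and rewrite."""
    score = score_factual_consistency(response, sources)
    if score >= threshold:
        return {"score": score, "flagged": False, "text": response}
    # A real corrector would explain the inaccuracy and produce a minimally
    # edited rewrite grounded in the source documents.
    return {
        "score": score,
        "flagged": True,
        "explanation": "Response contains claims not supported by the sources.",
        "text": "[corrected, source-grounded response would be generated here]",
    }


if __name__ == "__main__":
    sources = ["The product launched in 2024 and supports English and German."]
    answer = "The product launched in 2019 and supports twelve languages."
    print(correct_if_needed(answer, sources))
```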

In initial tests, the Hallucination Corrector reduced hallucination rates in enterprise AI systems to about 0.9%. The feature is integrated into Vectara's platform and gives users several output options, including automatic corrections and detailed explanations intended for expert review. The Corrector aims to meet the high accuracy standards required in regulated industries such as finance, healthcare, and law.
