Vectara Introduces Hallucination Corrector for Enterprise AI
Vectara has launched a Hallucination Corrector to enhance the reliability of enterprise AI systems, reducing hallucination rates to about 0.9%, announced in a press release.
Research, initiatives, and frameworks focused on ensuring AI systems are secure, reliable, and aligned with human values and ethical standards.
Vectara has launched a Hallucination Corrector to enhance the reliability of enterprise AI systems, reducing hallucination rates to about 0.9%, announced in a press release.
Marty Sprinzen, CEO of Vantiq, will keynote the Smart Cities Summit North America, discussing AI's impact on public sector operations.
GyanAI has launched a new AI model designed to eliminate hallucinations, ensuring reliability and data privacy for enterprises, as announced in a press release.
MUNIK has been awarded the world's first ISO/PAS 8800 certification by DEKRA for its AI safety development process in the automotive sector.
TrojAI has joined the Cloud Security Alliance as an AI Corporate Member, becoming a strategic partner in the CSA's AI Safety Ambassador program.
Bloomberg researchers have published two papers revealing that retrieval-augmented generation (RAG) LLMs may be less safe than previously thought, particularly in financial services.
OpenAI's latest AI models, o3 and o4-mini, are being used for reverse location searches from photos, raising privacy concerns.
Hong Kong-based AI startup viAct has raised $7.3 million in Series A funding led by Venturewave Capital, with participation from Singtel Innov8 and others, to enhance its AI safety solutions and expand globally.
OpenAI has revised its Preparedness Framework, allowing for adjustments in safety requirements if competitors release high-risk AI systems without similar safeguards.
NTT Research has launched the Physics of Artificial Intelligence Group to advance AI understanding and trust, led by Dr. Hidenori Tanaka.
DeepMind has released a detailed 145-page paper outlining its approach to AGI safety, predicting the potential arrival of AGI by 2030 and highlighting significant risks and mitigation strategies.
Collaborative Digital Innovations has partnered with Purdue University's CERIAS to advance AI security and compliance research, focusing on threat detection and regulatory compliance.
OWASP has promoted its GenAI Security Project to flagship status, reflecting its expanded focus on generative AI security. The project now includes over 600 experts and offers comprehensive resources for secure AI development.
Seeing Machines has announced the appointment of John Noble as Chief Technology Officer and Dr. Mike Lenné as Chief Safety Officer to enhance its technology and safety strategies.
Cloudflare has introduced 'Cloudflare for AI', a suite of tools designed to enhance the security and control of AI applications for businesses, as announced in a press release.
Innodata has announced the beta launch of its Generative AI Test & Evaluation Platform, powered by NVIDIA technology, to enhance AI model safety and performance.
IFS has been appointed as an Advisory Board Member of the UK's All-Party Parliamentary Group on AI, contributing to AI policy discussions alongside major industry players.
Anthropic has developed methods to identify when AI systems conceal their true objectives, a significant step in AI safety research. The company trained its AI assistant, Claude, to hide its goals, then successfully detected these hidden agendas using various auditing techniques.
NewsGuard has introduced the FAILSafe service to shield AI models from foreign influence operations, particularly targeting Russian, Chinese, and Iranian disinformation.
Google has updated its Responsible AI team webpage, removing references to 'diversity' and 'equity'. This change follows similar actions by other tech companies.
Weekly report on AI regulations, safety standards, government policies, and compliance requirements worldwide.