Seeing Machines Appoints New CTO and Chief Safety Officer
Seeing Machines has announced the appointment of John Noble as Chief Technology Officer and Dr. Mike Lenné as Chief Safety Officer to enhance its technology and safety strategies.
Cloudflare has introduced 'Cloudflare for AI', a suite of tools designed to enhance the security and control of AI applications for businesses, as announced in a press release.
Innodata has announced the beta launch of its Generative AI Test & Evaluation Platform, powered by NVIDIA technology, to enhance AI model safety and performance.
IFS has been appointed as an Advisory Board Member of the UK's All-Party Parliamentary Group on AI, contributing to AI policy discussions alongside major industry players.
Anthropic has developed methods to identify when AI systems conceal their true objectives, a significant step in AI safety research. The company trained its AI assistant, Claude, to hide its goals, then successfully detected these hidden agendas using various auditing techniques.
NewsGuard has introduced the FAILSafe service to shield AI models from foreign influence operations, particularly targeting Russian, Chinese, and Iranian disinformation.
Google has updated its Responsible AI team webpage, removing references to 'diversity' and 'equity'. This change follows similar actions by other tech companies.
CompScience has teamed up with the California Manufacturers & Technology Association and Bender Insurance Solutions to launch an AI-driven program aimed at reducing workplace injuries and insurance costs for California manufacturers.
HiddenLayer's latest report reveals a significant increase in AI breaches, with 74% of organizations experiencing incidents in 2024. The report emphasizes the need for enhanced security measures as AI adoption grows.
Advanced Brain Methodologies Inc. (ABM) has announced the launch of the world's first Emotion Processing Unit (EPU) chip, a groundbreaking neuro-chip designed to revolutionize mental health and cognitive performance.
Safe Pro Group has appointed Young J. Bang, former Principal Deputy Assistant Secretary of the Army, to spearhead the integration of AI technology into U.S. military systems, announced in a press release.
Anthropic has unveiled a new approach called hierarchical summarization to enhance AI monitoring, particularly for its computer use capabilities.
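The core idea of hierarchical summarization is to condense long agent action logs in stages: first summarize fixed-size chunks of the raw log, then summarize those summaries into a single monitoring report a reviewer can scan. The sketch below illustrates that two-level structure with a trivial keyword-based stub summarizer; the function names, watchlist, and chunking scheme are all hypothetical, and this is not Anthropic's implementation (a real system would call a language model at each level).

```python
# Illustrative two-level summarization for monitoring an agent's action
# log. All names here are hypothetical, not Anthropic's actual code.

def summarize(chunk: list[str]) -> str:
    """Stub summarizer: keeps only entries matching watchlist terms.
    A real monitoring system would call an LLM here instead."""
    watchlist = ("delete", "payment", "credential")
    flagged = [a for a in chunk if any(w in a.lower() for w in watchlist)]
    return "; ".join(flagged) if flagged else "no notable actions"

def hierarchical_summary(actions: list[str], chunk_size: int = 4) -> str:
    """Level 1: summarize fixed-size chunks of the raw action log.
    Level 2: summarize the level-1 summaries into one report."""
    level1 = [summarize(actions[i:i + chunk_size])
              for i in range(0, len(actions), chunk_size)]
    return summarize(level1)

log = [
    "open browser", "click search box", "type query", "read results",
    "open settings", "delete account data", "close tab", "open email",
]
print(hierarchical_summary(log))  # → delete account data
```

Because each level discards routine detail, the top-level report stays short even as the underlying log grows, which is what makes the approach attractive for monitoring long computer-use sessions.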
Infosys has launched an open-source Responsible AI Toolkit to enhance trust and transparency in AI, announced in a press release. The toolkit is part of the Infosys Topaz Responsible AI Suite.
Leidos and SeeTrue have announced a collaboration to improve AI-powered threat detection technology for airport security and customs screenings.
OpenAI has banned accounts from China and North Korea for using ChatGPT in surveillance and influence operations, according to Reuters.
Exabits has partnered with Phala Network to offer TEE-enabled GPU clusters for secure AI data processing, announced in a press release.
DeepSeek, a Chinese AI startup, plans to open-source five repositories next week to promote transparency and community-driven innovation, amid ongoing privacy concerns.
Securiti has partnered with Databricks to integrate Databricks Mosaic AI and Delta tables into its Gencore AI solution, enabling safer enterprise AI development, according to a press release.
Giskard has launched Phare, an open and independent benchmark that assesses AI models on safety and security dimensions such as hallucination and bias, with Google DeepMind as a research partner.
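A benchmark like this typically runs many test cases per dimension and reports a pass rate for each. The sketch below shows that aggregation step only; the dimension names come from the announcement, but the data structures and scoring scheme are hypothetical and not Phare's actual harness.

```python
# Toy aggregation for a multi-dimension model benchmark. Hypothetical
# structure, illustrating per-dimension pass rates only.
from collections import defaultdict

def score_results(results: list[dict]) -> dict[str, float]:
    """Each result: {"dimension": str, "passed": bool}.
    Returns the pass rate per dimension."""
    totals = defaultdict(int)
    passes = defaultdict(int)
    for r in results:
        totals[r["dimension"]] += 1
        passes[r["dimension"]] += int(r["passed"])
    return {d: passes[d] / totals[d] for d in totals}

results = [
    {"dimension": "hallucination", "passed": True},
    {"dimension": "hallucination", "passed": False},
    {"dimension": "bias", "passed": True},
]
print(score_results(results))  # → {'hallucination': 0.5, 'bias': 1.0}
```

Reporting per-dimension rates rather than one blended score keeps failure modes like hallucination and bias separately visible, which is the point of benchmarking along multiple dimensions.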
Former OpenAI CTO Mira Murati has launched a new AI startup, Thinking Machines Lab, with a team of top researchers and engineers, including many from OpenAI.