
Meta's Ray-Ban Privacy Shift, Denmark's Deepfake Crackdown, & U.S. AI Safety Collaboration - AI Policy Brief #16
May 06, 2025 - AI Policy Brief
Hi there,
Welcome to this week's edition of the AI Policy Brief, your go-to source for the latest developments in AI regulations, safety standards, and government policies around the world. This week, we're covering a range of topics, from Nigeria's House of Representatives proposing a new AI regulation framework to Denmark introducing a law against non-consensual deepfakes. In the United States, a new bill targets the smuggling of Nvidia chips to China, while Meta updates its privacy policy for its Ray-Ban glasses.
On the safety front, OpenAI and Anthropic have partnered with the U.S. government to enhance AI safety measures, and Google's latest AI model, Gemini 2.5 Flash, has underperformed in safety tests. Meanwhile, the Reserve Bank of New Zealand has issued a warning about AI risks. Stay informed with us as we delve into these stories and more, providing you with the insights you need to navigate the evolving landscape of AI policy.
National Policy
Nigeria's House of Representatives is developing a national framework to regulate AI, focusing on innovation and ethical responsibility. California's state government has begun deploying generative AI to improve efficiency in state operations, targeting highway congestion and customer service.
- Nigeria's House of Representatives Proposes AI Regulation Framework
- California Deploys GenAI to Boost Government Efficiency
International Policy
Denmark has enacted a law against non-consensual deepfakes, while Nvidia is redesigning AI chips for the Chinese market to comply with US export rules. The Italian Senate approved a bill aligning with the EU AI Act, and Thailand launched a National AI Committee. A US bill aims to prevent Nvidia chip smuggling to China, and Sand AI released a video model that censors sensitive images. Nvidia and Anthropic are in a dispute over US chip export controls, and Meta cites tariffs as a driver of rising AI infrastructure costs.
- Denmark Introduces Law Against Non-Consensual Deepfakes
- Nvidia Designs China-Specific AI Chips Amid US Export Rules
- Italian Senate Approves AI Bill to Align with EU AI Act
- Thailand Launches National AI Committee
- US Bill Targets Nvidia Chip Smuggling to China
- Sand AI Censors Sensitive Images in New Video Model
- Nvidia Criticizes Anthropic's Claims on Chip Export Restrictions
- Meta Cites Tariffs for AI Infrastructure Cost Increase
- Anthropic Suggests Changes to US AI Chip Export Controls
- Nvidia Criticizes Anthropic's Support for AI Chip Export Controls
Regulatory Actions
Meta has updated its privacy policy for Ray-Ban glasses to use more user data for AI training. Figure AI issued cease-and-desist letters to stop unauthorized stock sales. The Cyril Shroff Centre for AI, Law & Regulation launched in India, while a study shows 77% of Australians support stronger AI regulation. The UK's FCA introduced an AI testing service for financial firms, and a Vermont bill on AI in political ads is advancing. The U.S. Congress passed the Take It Down Act against deepfake abuse.
- Meta Updates Privacy Policy for Ray-Ban Glasses
- Figure AI Issues Cease-and-Desist Over Stock Sales
- India's First AI, Law & Regulation Centre Launched
- Study Shows Australians Support AI Regulation
- FCA Introduces AI Testing Service for Financial Firms
- Vermont Bill on AI in Political Ads Advances
- U.S. Congress Passes Take It Down Act Against Deepfake Abuse
Defense & Security
Heidrick & Struggles has introduced a new practice focused on providing technology solutions for government and defense sectors globally, leveraging expertise from over 30 partners. Nvidia has raised concerns with US lawmakers about Huawei's advancements in AI, citing potential implications for national security and competition in the tech industry.
- Heidrick & Struggles Launches Government & Defense Tech Practice
- Nvidia Raises Concerns About Huawei's AI Progress
Innovation & Investment
GyanAI has launched a new language model aimed at eliminating inaccuracies in AI outputs, focusing on regulated industries such as healthcare and finance. The model's neuro-symbolic architecture enhances reliability and data privacy, making it ideal for mission-critical applications.
AI Safety
OpenAI and Anthropic have partnered with the U.S. government to enhance AI safety research, while TrojAI joins the Cloud Security Alliance to promote responsible AI use. Kevin Systrom criticizes AI chatbots for prioritizing engagement over quality, and OpenAI addresses a bug in ChatGPT that allowed minors to access explicit content. Google's Gemini 2.5 model underperforms in safety tests, and the Reserve Bank of New Zealand warns of AI risks in financial services. MUNIK receives the first ISO/PAS 8800 certification for AI safety.
- OpenAI and Anthropic Partner with U.S. Government on AI Safety
- TrojAI Joins Cloud Security Alliance as AI Corporate Member
- Instagram Co-Founder Criticizes AI Chatbots
- OpenAI Addresses Bug Allowing Minors to Access Erotic Content
- Google's Gemini 2.5 Flash AI Model Underperforms in Safety Tests
- Reserve Bank of New Zealand Warns of AI Risks
- MUNIK Receives First ISO/PAS 8800 Certification for AI Safety
Court Cases, Hearings and Lawsuits
The CEOs of OpenAI, AMD, CoreWeave, and Microsoft will testify before the US Senate on May 8, 2025, to discuss AI innovation. US District Judge Vince Chhabria questions Meta Platforms' fair use defense in an AI training lawsuit. Promai is suing its former CEO over alleged fraud involving ChatGPT. Nvidia disputes Anthropic's claims about chip smuggling to China.
- AI CEOs to Testify Before US Senate on May 8, 2025
- Judge Questions Meta's Fair Use in AI Training Case
- Promai Sues Former CEO for Fraud Using ChatGPT
- Nvidia and Anthropic Dispute Smuggling Claims
Subscribe to AI Policy Brief
Weekly report on AI regulations, safety standards, government policies, and compliance requirements worldwide.