
China Bans Nvidia AI Chips, Google DeepMind Enhances Safety, and Disney Sues MiniMax - AI Policy Brief #36
September 23, 2025
AI Policy Brief
Hi there,
Welcome to this week's edition of the AI Policy Brief, your go-to source for the latest developments in AI regulations, safety standards, and government policies worldwide. This week, we're diving into significant international policy shifts, including China's recent decision to ban tech firms from purchasing AI chips from Nvidia. This move is part of a broader strategy to control the flow of advanced technology within its borders. Meanwhile, in the UK, tech leaders attended a state banquet held during President Trump's visit, highlighting the ongoing dialogue between political figures and the tech industry.
On the AI safety front, Google DeepMind has updated its AI safety framework, aiming to enhance the reliability and security of its AI systems. In a related development, OpenAI has imposed restrictions on ChatGPT usage for individuals under 18, reflecting growing concerns over AI's impact on younger users. Additionally, the Indian government is set to release its AI governance framework by September 28, marking a significant step in its techno-legal strategy for AI safety. Stay tuned for more insights and updates in this rapidly evolving field.
International Policy
The Cyberspace Administration of China has banned domestic tech companies from purchasing Nvidia's AI chips, specifically the RTX Pro 6000D, to boost local chip production. Meanwhile, tech leaders like Tim Cook and Sam Altman attended a state banquet in the U.K. during President Trump's visit, marking the signing of the Tech Prosperity Deal to enhance AI collaboration.
- China Bans Tech Firms from Buying Nvidia AI Chips
- Tech Leaders Attend Trump's UK State Banquet
- Anthropic CEO Warns Against US AI Chip Sales to China
Defense & Security
Anthropic has restricted the use of its Claude models for domestic surveillance, a policy that affects agencies such as the FBI and Secret Service and has created tensions with the Trump administration.
- Anthropic Restricts Claude Models for Surveillance
- Albania Appoints AI Minister to Tackle Corruption
Innovation & Investment
Niti Aayog reports that AI could boost India's GDP by $500-600 billion by 2035, with Finance Minister Nirmala Sitharaman emphasizing the need for responsible AI regulations. Meanwhile, Munich-based startup Kertos has secured €14 million in Series A funding to enhance its AI compliance platform for European businesses.
- AI Could Boost India's GDP by $500-600 Billion by 2035
- Kertos Secures €14M to Enhance AI Compliance Platform
- Force for Good Proposes $125 Trillion AI and Sustainability Plan
- webAI Appoints Dr. Jason Rathje as President of Public Sector
AI Safety
Google DeepMind announced the third version of its Frontier Safety Framework on September 22, 2025, enhancing risk management for advanced AI models. Meanwhile, OpenAI has introduced new restrictions for ChatGPT users under 18, aiming to improve safety by limiting discussions on sensitive topics.
- Google DeepMind Updates Frontier Safety Framework for AI Risk Management
- OpenAI Implements Safety Policies for Underage ChatGPT Users
- OpenAI and Apollo Research Study AI Deception
- Indian Government to Release AI Governance Framework by September 28
- India's Techno-Legal AI Safety Strategy Unveiled
- Anthropic Report: Claude AI Automates 77% of Tasks
Court Cases, Hearings and Lawsuits
In recent legal developments, Disney, Universal, and Warner Bros Discovery have filed a lawsuit against Chinese AI company MiniMax for allegedly using their characters without authorization in AI-generated content. Meanwhile, California attorney Amir Mostafavi has been fined $10,000 for submitting an appeal with fabricated quotes generated by ChatGPT, highlighting the necessity of verifying AI-generated content in legal proceedings.
- Disney, Universal, Warner Bros Sue MiniMax for Copyright Infringement
- California Attorney Fined for Using Fake AI-Generated Citations
- Parents Advocate for AI Chatbot Regulation in Senate
We hope you enjoyed this article.
Consider subscribing to one of our newsletters like AI Policy Brief or Daily AI Brief.