
OpenAI's AI Proposals to White House, China's AI Labeling Rules, and EU's AI Act - AI Policy Brief #9
Hi there,
Welcome to this week's edition of the AI Policy Brief, your go-to source for the latest developments in AI regulations, safety standards, government policies, and compliance requirements worldwide. This week, we're covering a range of topics from OpenAI's proposals to the White House on AI leadership and copyright regulations, to the Chinese Government setting new rules for AI content labeling. Additionally, the EU is making strides with its third draft of the AI Act, while Tajikistan is taking a proactive step by proposing a UN resolution on AI cooperation.
In the realm of AI safety, companies like Infosys, KPMG, and Accenture are ramping up their hiring to boost trust and safety in AI, and Anthropic researchers have developed techniques to detect when AI models such as Claude are hiding their true objectives. On the defense front, the U.S. House has passed a bill focusing on AI and nanotech for border security, and NASA has acquired a license from Clearview AI. Stay informed with us as we delve into these stories and more, providing you with the insights you need to navigate the evolving landscape of AI policy.
National Policy
OpenAI submitted proposals to the White House to enhance U.S. AI leadership, focusing on national security and innovation. Additionally, OpenAI urged the U.S. to relax AI copyright regulations as part of a broader AI Action Plan. Palantir also recommended an AI Action Plan to the White House to sustain U.S. leadership in AI.
- OpenAI Urges U.S. to Ease AI Training Regulations
- OpenAI Proposes Recommendations for U.S. AI Action Plan
- Palantir Recommends AI Action Plan to White House
International Policy
Google has proposed changes to copyright and export regulations, while the Chinese Government mandates labeling of AI-generated content. OpenAI suggests banning PRC-supported AI models, and the EU seeks to boost GPU stockpiles amid US export limits. Tajikistan proposes a UN resolution for AI cooperation, and India hosts an AI readiness consultation with UNESCO.
- Google Advocates for Relaxed AI Copyright and Export Rules
- Chinese Government Sets AI Content Labeling Rules
- OpenAI Proposes Ban on PRC-Supported AI Models
- China Monitors AI Startup DeepSeek's Rapid Success
- EU Seeks to Boost GPU Stockpile Amid US Chip Export Limits
- Tajikistan Proposes UN Resolution on AI Cooperation
- EU Releases Third Draft of AI Act Rules
- MeitY and UNESCO Host AI Readiness Consultation in Hyderabad
Regulatory Actions
IOSCO has released a report on AI's impact in capital markets, emphasizing investor protection and market integrity. Congressman Jim Jordan is probing Google and OpenAI over alleged censorship pressure from the Biden Administration. Google and Meta face criticism for restrictive AI model licenses, even as both companies, along with OpenAI, argue that AI training qualifies as fair use. Brazil's ANPD plans to regulate AI ahead of a formal legal framework, a California senator has proposed rules for AI in employment decisions, and Spain has approved fines for unlabeled AI content. The UK FCA is addressing AI challenges in banking compliance.
- Jim Jordan Questions Big Tech on Biden Censorship
- Google and Meta AI Models Criticized for Licensing
- Google and OpenAI Advocate for AI Training as Fair Use
- Brazil's ANPD to Regulate AI Before Legal Framework
- California Senator Proposes AI Regulation in Employment
- Spain Approves Fines for Unlabeled AI Content
- FCA Roundtable on AI Challenges for UK Banks
- IOSCO Releases AI in Capital Markets Report
Defense & Security
Google's Threat Intelligence Group reports on the misuse of generative AI by threat actors, while NewsGuard launches FAILSafe to counter foreign disinformation. The U.S. House passes a bill to integrate AI and nanotech in border security, and NASA acquires a Clearview AI license for its Inspector General's office.
- Google Report on Misuse of Generative AI by Threat Actors
- NewsGuard Introduces FAILSafe for AI Protection
- U.S. House Passes AI and Nanotech Border Security Bill
- NASA Acquires Clearview AI License
- SCSP and Coursera Offer Free AI Course for National Security
- NGA Plans Increased AI Resources for 2025
- Palantir CEO Advocates for Japan-US AI Defense Collaboration
AI Safety
Info-Tech Research Group has introduced a four-step plan to assist IT leaders in managing AI compliance challenges. Infosys, KPMG, and Accenture are expanding recruitment for AI trust and safety roles, with a 36% increase this year. Researchers at Anthropic have developed methods to identify when AI systems are hiding their true objectives. Tumeryk has launched the AI Trust Score™ to evaluate risks in Generative AI systems. Los Alamos National Laboratory introduced the LoRID method to defend neural networks from adversarial attacks. The Open-Source AI Foundation has launched The 20% Project to reform the U.S. criminal justice system using open-source AI.
- Infosys, KPMG, and Accenture Expand AI Trust and Safety Hiring
- Anthropic Develops AI Objective Detection Techniques
- Tumeryk Introduces AI Trust Score for AI Safety
- Los Alamos Lab Develops LoRID for Neural Network Defense
- Open-Source AI Foundation Launches Criminal Justice Reform Initiative
- Info-Tech Releases AI Compliance Strategy
Court Cases, Hearings and Lawsuits
Changshu People’s Court in China has ruled that AI-generated images are eligible for copyright protection, setting a legal precedent. Meta is being sued in France for allegedly using protected content to train AI models without permission. A December trial date has been set for the lawsuit between Elon Musk and OpenAI over OpenAI's shift to a for-profit model.
- AI-Generated Images Gain Copyright Protection in China
- OpenAI and Elon Musk Set December Trial Date
- Meta Sued in France Over AI Training Practices
Subscribe to AI Policy Brief
Weekly report on AI regulations, safety standards, government policies, and compliance requirements worldwide.