OpenAI's AI Proposals to White House, China's AI Labeling Rules, and EU's AI Act - AI Policy Brief #9

March 18, 2025 - AI Policy Brief

Hi there,

Welcome to this week's edition of the AI Policy Brief, your go-to source for the latest developments in AI regulations, safety standards, government policies, and compliance requirements worldwide. This week, we're covering a range of topics from OpenAI's proposals to the White House on AI leadership and copyright regulations, to the Chinese Government setting new rules for AI content labeling. Additionally, the EU is making strides with its third draft of the AI Act, while Tajikistan is taking a proactive step by proposing a UN resolution on AI cooperation.

In the realm of AI safety, companies like Infosys, KPMG, and Accenture are ramping up their hiring to boost trust and safety in AI, and Anthropic researchers have demonstrated methods for detecting hidden objectives in the company's Claude models. On the defense front, the U.S. House has passed a bill focusing on AI and nanotech for border security, and NASA has acquired a license from Clearview AI. Stay informed with us as we delve into these stories and more, providing you with the insights you need to navigate the evolving landscape of AI policy.

National Policy

OpenAI submitted proposals to the White House aimed at strengthening U.S. AI leadership, with a focus on national security and innovation, and urged the U.S. to relax AI copyright regulations as part of a broader AI Action Plan. Palantir likewise submitted AI Action Plan recommendations to the White House to sustain U.S. leadership in AI.

International Policy

Google has proposed changes to copyright and export regulations, while the Chinese Government mandates labeling of AI-generated content. OpenAI suggests banning PRC-supported AI models, and the EU seeks to boost GPU stockpiles amid US export limits. Tajikistan proposes a UN resolution for AI cooperation, and India hosts an AI readiness consultation with UNESCO.

Regulatory Actions

IOSCO has released a report on AI's impact on capital markets, emphasizing investor protection and market integrity. Congressman Jim Jordan is probing Google and OpenAI over alleged censorship by the Biden Administration. Google and Meta face criticism for restrictive AI model licenses, even as both companies, along with OpenAI, argue that training AI on copyrighted material constitutes fair use. Brazil's ANPD plans to regulate AI ahead of a formal legal framework, and Spain is imposing fines for unlabeled AI-generated content. The UK FCA is addressing AI challenges in banking compliance.

Defense & Security

Google's Threat Intelligence Group reports on the misuse of generative AI by threat actors, while NewsGuard launches FAILSafe to counter foreign disinformation. The U.S. House passes a bill to integrate AI and nanotech in border security, and NASA acquires a Clearview AI license for its Inspector General's office.

AI Safety

Info-Tech Research Group has introduced a four-step plan to help IT leaders manage AI compliance challenges. Infosys, KPMG, and Accenture are expanding recruitment for AI trust and safety roles, with hiring up 36% this year. Researchers at Anthropic have developed methods to identify when AI systems are hiding their true objectives. Tumeryk has launched the AI Trust Score™ to evaluate risks in generative AI systems. Los Alamos National Laboratory has introduced the LoRID method to defend neural networks against adversarial attacks. The Open-Source AI Foundation has launched The 20% Project to reform the U.S. criminal justice system using open-source AI.

Court Cases, Hearings and Lawsuits

The Changshu People’s Court in China has ruled that AI-generated images are eligible for copyright protection, setting a legal precedent. Meta is being sued in France for allegedly using protected content to train AI models without permission. A December trial date has been set for the lawsuit between Elon Musk and OpenAI over the company's shift to a for-profit structure.
