Trump's New AI Guidelines, EU's Systemic AI Risk Rules, and AMD's China Export Resumption - AI Policy Brief #27

July 22, 2025 - AI Policy Brief
Hi there,

Welcome to this week's edition of the AI Policy Brief, your go-to source for the latest developments in AI regulations, safety standards, and government policies worldwide. This week, the Trump Administration is set to announce new AI policy guidelines aimed at cutting regulation and expanding energy supply for data centers. Meanwhile, the UK has launched the Isambard-AI supercomputer, a significant step toward using AI for disease detection.

On the international front, the EU has issued guidelines for AI models that pose systemic risks, while Meta has declined to sign the EU's AI Code of Practice. In the realm of AI safety, a UN Report is urging stronger measures for deepfake detection, highlighting the growing concern over AI-generated content. Stay informed as we delve into these stories and more, providing you with the insights you need to navigate the evolving landscape of AI policy.

National Policy

The Trump Administration is preparing to unveil new AI policy guidelines aimed at reducing regulations and increasing energy sources for data centers, with an announcement expected on July 23, 2025.
The UK Government has introduced the Isambard-AI supercomputer in Bristol, a £225 million project being applied to disease detection in dairy cows and the diagnosis of skin cancer.
The U.S. Food and Drug Administration is forming two AI councils to manage its internal AI applications and create policies for AI in products it regulates, following successful AI tool pilots.

International Policy

Meta Platforms has chosen not to sign the European Union's Code of Practice for General Purpose AI, citing legal uncertainties.
The European Commission has released guidelines to help providers of AI models with systemic risks comply with the upcoming EU AI Act.
Advanced Micro Devices plans to resume AI chip exports to China, pending U.S. approval.
Anthropic intends to sign the EU's AI Code of Practice.
The European Commission is inviting applications for its AI Act Advisory Forum.
The SIIA is seeking clarification on how the EU AI Act applies to EdTech.
The European Parliament is urging copyright reform for AI training.

Regulatory Actions

Autoriteit Persoonsgegevens, the Dutch data protection authority, will launch a regulatory sandbox in the Netherlands by August 2026 to test AI systems under the European AI Act.
The California Judicial Council has set new AI guidelines for courts, mandating a use policy by December 15.
The UK government plans a £1 billion investment to enhance AI infrastructure over five years.
The British Standards Institution has introduced a new AI audit standard for reliable assessments.

Defense & Security

FINN Partners has introduced 'CANARY FOR CRISIS', an AI-powered platform aimed at helping communication teams tackle narrative manipulation and reputational threats.
Nvidia's AI chip sale to the United Arab Emirates is delayed due to U.S. national security concerns over potential smuggling to China.

Innovation & Investment

Bloomberg Government has introduced 'Federal Funding Flow', an AI tool for federal budget management.
The UK Tax Authority is expanding its use of AI for tax compliance.
The U.S. Department of Energy has held a summit on AI's role in nuclear energy.
The Delhi Government has proposed an industrial policy with AI incentives.
OpenAI and the UK government are partnering on AI investment.
Dutch publishers and TNO are developing the GPT-NL AI model.

AI Safety

Impel has introduced a domain-tuned LLM and the Archias research initiative to improve AI safety in the automotive sector, achieving a 20% accuracy boost in customer applications.
The United Nations has released a report calling for advanced tools to detect AI-generated deepfakes, highlighting risks such as election interference and financial fraud.
The Digital Cooperation Organization has launched the DCO AI Ethics Evaluator, a framework to help ensure ethical AI practices, unveiled at the AI for Good Summit 2025.

Court Cases, Hearings and Lawsuits

OpenAI is contesting the jurisdiction of an Ontario court in a copyright lawsuit filed by Canadian news organizations.
A US judge has allowed authors to sue Anthropic over alleged copyright violations in the training of its Claude chatbot.
The Kerala High Court has prohibited the use of AI in judicial decisions, stressing human oversight.
Separately, a High Court ruling on AI misuse in legal cases emphasized the need to verify AI-generated content.
David Sacks faces scrutiny over potential conflicts of interest in AI and crypto investments.

We hope you enjoyed this article.

Consider subscribing to one of our newsletters like AI Policy Brief or Daily AI Brief.

Also, consider following us on social media.
