
Trump's New AI Guidelines, EU's Systemic AI Risk Rules, and AMD's China Export Resumption - AI Policy Brief #27
July 22, 2025 - AI Policy Brief
Hi there,
Welcome to this week's edition of the AI Policy Brief, your go-to source for the latest developments in AI regulations, safety standards, and government policies worldwide. This week, the Trump Administration is set to announce new guidelines for AI, aiming to provide a clearer framework for the technology's integration into various sectors. Meanwhile, the UK has launched the Isambard-AI Supercomputer, a significant step forward in using AI for disease detection.
On the international front, the EU has issued guidelines for AI models that pose systemic risks, while Meta has declined to sign the EU's AI Code of Practice. In the realm of AI safety, a UN Report is urging stronger measures for deepfake detection, highlighting the growing concern over AI-generated content. Stay informed as we delve into these stories and more, providing you with the insights you need to navigate the evolving landscape of AI policy.
National Policy
The Trump Administration is preparing to unveil new AI policy guidelines aimed at reducing regulations and increasing energy sources for data centers, with an announcement expected on July 23, 2025.
The UK Government has introduced the Isambard-AI supercomputer in Bristol, a £225 million project to improve disease detection in dairy cows and skin cancer diagnostics.
The U.S. Food and Drug Administration is forming two AI councils to manage its internal AI applications and create policies for AI in products it regulates, following successful AI tool pilots.
- Trump Administration AI Guidelines to Ease Regulations
- House Passes AI Pilot Bill for Consumer Safety
- UK Government Launches Isambard-AI Supercomputer
- FDA Forms AI Councils for Internal Use and Policy
- Texas Enacts Law on Electronic Health Records and AI
International Policy
Meta Platforms has chosen not to sign the European Union's Code of Practice for General-Purpose AI, citing legal uncertainties. The European Commission has released guidelines to help providers of AI models that pose systemic risks comply with the upcoming EU AI Act. Advanced Micro Devices plans to resume AI chip exports to China, pending U.S. approval, while Anthropic intends to sign the EU's AI Code of Practice. The European Commission is also inviting applications for its AI Act Advisory Forum, the SIIA is seeking clarification on how the EU AI Act applies to EdTech, and the European Parliament is urging copyright reform for AI training.
- Meta Declines EU AI Code of Practice
- European Commission Issues AI Guidelines for Systemic Risks
- AMD to Resume AI Chip Exports to China
- Anthropic to Sign EU AI Code of Practice
- European Commission Opens AI Act Advisory Forum Applications
- SIIA Seeks Clarification on EU AI Act for EdTech
- EU Parliament Urges Copyright Reform for AI Training
Regulatory Actions
Autoriteit Persoonsgegevens, the Dutch data protection authority, will launch a regulatory sandbox in the Netherlands by August 2026 to test AI systems under the European AI Act. The California Judicial Council has set new AI guidelines for courts, mandating a use policy by December 15. The UK government plans a £1 billion investment over five years to enhance AI infrastructure, and the British Standards Institution has introduced a new AI audit standard for reliable assessments.
- Netherlands to Launch AI Regulatory Sandbox by 2026
- California Judicial Council Sets AI Rules for Courts
- Britain Invests $1.3 Billion in AI Infrastructure
- BSI Introduces AI Audit Standard
Defense & Security
FINN Partners has introduced 'CANARY FOR CRISIS', an AI-powered platform aimed at helping communication teams tackle narrative manipulation and reputational threats.
Nvidia's AI chip sale to the United Arab Emirates is delayed due to U.S. national security concerns over potential smuggling to China.
Innovation & Investment
Bloomberg Government has introduced 'Federal Funding Flow', an AI tool for federal budget management, while the UK Tax Authority is expanding its use of AI for tax compliance. The U.S. Department of Energy held a summit on AI's role in nuclear energy, and the Delhi Government proposed an industrial policy with AI incentives. OpenAI and the UK government are partnering on AI investment, and Dutch news publishers and TNO are developing the GPT-NL AI model.
- Bloomberg Government Launches AI Tool for Federal Budget
- UK Tax Authority Enhances AI for Tax Compliance
- U.S. Department of Energy Hosts AI and Nuclear Energy Summit
- Delhi Government Proposes Industrial Policy with AI Incentives
- OpenAI Partners with UK Government for AI Investment
- Dutch News Publishers and TNO Develop GPT-NL
AI Safety
Impel has introduced a domain-tuned LLM and the Archias research initiative to improve AI safety in the automotive sector, achieving a 20% accuracy boost in customer applications.
The United Nations has released a report calling for advanced tools to detect AI-generated deepfakes, highlighting risks such as election interference and financial fraud.
The Digital Cooperation Organization has launched the DCO AI Ethics Evaluator, a framework to help ensure ethical AI practices, unveiled at the AI for Good Summit 2025.
- Impel Unveils Automotive AI LLM and Safety Initiative
- UN Report Urges Stronger Deepfake Detection Measures
- DCO Launches AI Ethics Tool
Court Cases, Hearings and Lawsuits
OpenAI is contesting the jurisdiction of an Ontario court in a copyright lawsuit filed by Canadian news organizations. A US judge has allowed authors to sue Anthropic over alleged copyright violations in training the Claude chatbot. The Kerala High Court has prohibited AI in judicial decisions, stressing human oversight. The High Court has ruled on AI misuse in legal cases, highlighting the need for verification of AI-generated content. David Sacks faces scrutiny over potential conflicts of interest in AI and crypto investments.
- OpenAI Challenges Ontario Court in Copyright Lawsuit
- US Judge Allows Authors to Sue Anthropic for Copyright Infringement
- Kerala High Court Prohibits AI in Judicial Decisions
- High Court Rules on AI Misuse in Legal Cases
- David Sacks Faces Conflict of Interest Scrutiny
We hope you enjoyed this article.
Consider subscribing to one of our newsletters like AI Policy Brief or Daily AI Brief.
Also, consider following us on social media.