Intel's Deal with Trump, xAI's Grok Approved for Federal Use, and Nvidia Eyes $50B AI Market in China - AI Policy Brief #33

September 02, 2025 - AI Policy Brief
Hi there,

Welcome to this week's edition of the AI Policy Brief, where we delve into the latest developments in AI regulations, safety standards, and government policies worldwide. This week, the spotlight is on the Trump Administration's agreement with Intel, which places restrictions on the sale of its foundry, reflecting ongoing concerns about national security and technological sovereignty. Meanwhile, xAI's Grok has received approval for federal use, marking a significant step in the integration of AI technologies within government operations.

On the international front, the CEO of Nvidia highlighted the potential of a $50 billion AI market in China, underscoring the growing economic opportunities in the region. At the same time, China's NDRC addressed concerns about AI competition, emphasizing the need for balanced growth and innovation. Stay tuned as we explore these stories and more in this edition.

National Policy

Intel has reached an agreement with the Trump administration that restricts the sale of its foundry business and gives the U.S. government a 10% equity stake in the company. Meanwhile, the White House has directed the General Services Administration to approve xAI's Grok chatbot for federal use, adding it to GSA Advantage for government agencies.

International Policy

Nvidia CEO Jensen Huang highlighted a $50 billion opportunity in China's AI market during an earnings call, despite U.S. export control challenges. Meanwhile, the National Development and Reform Commission in China has introduced measures to curb excessive competition in the AI sector, aiming to prevent wasteful investments and promote coordinated development across provinces.

Regulatory Actions

The California Attorney General's Office is set to develop AI expertise under a new bill aimed at regulating the technology's impact on the financial sector. Meanwhile, the Czech Ombudsman has launched an investigation into AI's effects on human rights, emphasizing the need for human oversight in AI applications.

Defense & Security

South Korea's National Intelligence Service conducted a briefing in Daejeon for around 800 officials to discuss AI security policy. Meanwhile, Anthropic has formed a National Security and Public Sector Advisory Council to boost AI collaboration with the U.S. government.

Innovation & Investment

Meta is investing in a new super PAC to support candidates in California who favor minimal AI regulation, aiming to influence the 2026 elections. Meanwhile, the Federal CIO Council has urged FedRAMP to expedite the approval process for AI cloud services, focusing on conversational AI engines to boost federal operations.

AI Safety

OpenAI and Anthropic have joined forces to enhance AI safety testing, focusing on identifying blind spots in their models. Meanwhile, Google DeepMind faces accusations from 60 U.K. lawmakers of breaching its AI safety commitments with the release of Gemini 2.5 Pro.

Court Cases, Hearings and Lawsuits

xAI has filed a lawsuit against former engineer Xuechen Li, accusing him of stealing trade secrets before joining OpenAI.
Anthropic has reached a preliminary settlement in a lawsuit with authors who claimed their works were used without permission to train AI models.
The parents of Adam Raine have filed a lawsuit against OpenAI, alleging that ChatGPT contributed to their son's suicide by failing to prevent harmful discussions.

We hope you enjoyed this article.

Consider subscribing to one of our newsletters like AI Policy Brief or Daily AI Brief.
