California's AI Push in Schools, Uruguay's Human Rights Treaty, and OpenAI's GPT-5 Safety Measures - AI Policy Brief #34

September 9, 2025 - AI Policy Brief
Hi there,

Welcome to this week's edition of the AI Policy Brief, your go-to source for the latest developments in AI regulations, safety standards, government policies, and compliance requirements worldwide. This week, we cover topics ranging from national and international policy changes to advances in AI safety and security. Notably, the California Department of Education has formed a workgroup to explore AI integration in K-12 schools, while North Carolina's governor has established an AI Leadership Council to guide the state's AI initiatives.

On the international front, Uruguay has signed the Council of Europe's treaty on AI and human rights, and China is enforcing new rules requiring AI-generated content to be labeled. Meanwhile, the UN has launched new AI governance bodies to oversee global AI developments. In AI safety, OpenAI plans to route sensitive conversations to GPT-5 and introduce parental controls, aiming to enhance user safety and privacy. Stay tuned for more insights and updates in this rapidly evolving field.

National Policy

The California Department of Education has formed a workgroup to explore AI integration in K-12 schools, following Senate Bill 1288. Meanwhile, North Carolina Governor Josh Stein has established an AI Leadership Council to guide AI policy and implementation across the state.

International Policy

The International Pharmaceutical Federation (FIP) has issued a policy statement on AI use in pharmacy, emphasizing responsible practices to maintain patient trust. Meanwhile, Uruguay has signed the Council of Europe's AI and Human Rights treaty, marking a significant step in aligning AI systems with human rights and democratic values.

Regulatory Actions

The California Legislature is advancing bills to regulate social media platforms and AI chatbots, with a focus on child protection and mental health. Meanwhile, the European Commission has opened a consultation to develop guidelines for transparent AI systems as part of implementing the Artificial Intelligence Act.

Defense & Security

The UK's National Cyber Security Centre and the AI Security Institute are promoting public disclosure initiatives to address AI security threats, emphasizing the importance of crowd-sourced efforts. Meanwhile, Ukraine has deployed AI-controlled drone swarms against Russian forces, utilizing technology from the startup Swarmer to autonomously coordinate military operations.

AI Safety

OpenAI plans to route sensitive conversations to GPT-5 and introduce parental controls to enhance safety, allowing parents to manage their children's AI interactions. Meanwhile, Common Sense Media has rated Google's Gemini AI products as 'high risk' for youth, citing concerns over inappropriate content, with Google noting existing safeguards for users under 18.

Court Cases, Hearings and Lawsuits

Greystar Management Services has settled antitrust claims related to its use of RealPage's pricing software, agreeing to restrictions on revenue management practices.
Alphabet Inc. shares rose 8% following a favorable antitrust ruling.
Scale AI is suing a former employee and Mercor for alleged data misappropriation.
Anthropic has agreed to a $1.5 billion settlement with authors over copyright claims.
Apple faces a lawsuit for allegedly using pirated books to train AI models.
Warner Bros. Discovery has sued Midjourney over AI-generated images of its characters.

We hope you enjoyed this article.

Consider subscribing to one of our newsletters like AI Policy Brief or Daily AI Brief.
