Welcome to this week's edition of the AI Policy Brief, your go-to source for the latest developments in AI regulations, safety standards, and government policies worldwide. This week, we're diving into significant international and national policy updates, including a major announcement from French President Emmanuel Macron, who has unveiled a €20 billion investment plan to boost AI innovation in Europe. This move is part of a broader strategy to position Europe as a leader in AI technology and ensure competitive growth in the global market.
In addition, we'll explore the latest insights from the defense sector, where a report by Thales highlights AI and quantum threats as top security concerns. As AI continues to evolve, understanding its implications for national security remains crucial. Stay tuned as we unpack these stories and more, providing the essential information you need to navigate the rapidly changing landscape of AI policy.
International Policy
The African Union and the Government of Ethiopia hosted a High-Level Policy Dialogue on AI, focusing on investment and innovation to support Africa's development. Sri Lanka is set to unveil, within the next few months, a white paper detailing its legislative roadmap for artificial intelligence, aiming to create a regulatory framework inspired by successful international models.
Luka Inc. has been fined $5.6 million by Italy's data protection authority over privacy violations involving its AI chatbot Replika. The U.S. Justice Department is investigating Google for potential antitrust violations in its deal with Character.AI. Workers from Amazon and Google are opposing a proposed 10-year freeze on state-level AI regulations.
Thales highlighted AI and quantum threats in its 2025 Data Threat Report, emphasizing concerns over generative AI and encryption vulnerabilities. President Donald J. Trump ordered the deployment of advanced nuclear reactors to bolster national security. Google DeepMind enhanced security for Gemini 2.5, introducing new strategies against indirect prompt injection attacks.
The U.S. Department of Commerce announced new export controls on AI technologies, while China introduced a national AI ethics committee to oversee AI development.
Ilya Sutskever, former chief scientist of OpenAI, reportedly proposed building a doomsday bunker over AGI risks. Anthropic has activated AI Safety Level 3 protections for Claude Opus 4 to prevent misuse in CBRN weapon development. The Civic AI Security Program demonstrated AI misuse risks to California lawmakers, underscoring the need for regulation.
Adam Thierer of the R Street Institute testified before a U.S. House subcommittee, advocating a pause on new AI regulations. Representative Marjorie Taylor Greene criticized Elon Musk's AI chatbot Grok for controversial statements. A German court ruled that Meta may use user data for AI training, while a federal judge ordered OpenAI to preserve data in its copyright case with The New York Times.