
France's €20B AI Boost, Thales Warns on AI Threats, and Sutskever's AGI Bunker Plan - AI Policy Brief #19
May 27, 2025
AI Policy Brief
Hi there,
Welcome to this week's edition of the AI Policy Brief, your go-to source for the latest developments in AI regulations, safety standards, and government policies worldwide. This week, we're diving into significant international and national policy updates, including a major announcement from French President Emmanuel Macron, who has unveiled a €20 billion investment plan to boost AI innovation in Europe. This move is part of a broader strategy to position Europe as a leader in AI technology and ensure competitive growth in the global market.
In addition, we'll explore the latest insights from the defense sector, where a report by Thales highlights AI and quantum threats as top security concerns. As AI continues to evolve, understanding its implications for national security remains crucial. Stay tuned as we unpack these stories and more, providing you with the essential information you need to navigate the rapidly changing landscape of AI policy.
International Policy
The African Union and the Government of Ethiopia hosted a High-Level Policy Dialogue on AI, focusing on investment and innovation to support Africa's development. Sri Lanka is set to unveil a white paper detailing its legislative roadmap for artificial intelligence in the next few months, aiming to create a regulatory framework inspired by successful international models.
Regulatory Actions
Luka Inc. has been fined $5.6 million by Italy for data breaches involving its AI chatbot Replika. The US Justice Department is investigating Google for potential antitrust violations in its deal with Character.AI. Workers from Amazon and Google are opposing a proposed 10-year freeze on state-level AI regulations.
- Luka Inc. Fined $5.6 Million by Italy for Data Breaches
- Google Under DOJ Investigation for Character.AI Deal
- Amazon and Google Workers Oppose AI Regulation Freeze
Defense & Security
Thales highlights AI and quantum threats in its 2025 Data Threat Report, emphasizing concerns over generative AI and encryption vulnerabilities. President Donald J. Trump has ordered the deployment of advanced nuclear reactors to bolster national security. Google DeepMind has strengthened Gemini 2.5 with new defenses against indirect prompt injection attacks.
- Thales Report Identifies AI and Quantum as Key Security Concerns
- Trump Orders Deployment of Advanced Nuclear Reactors for National Security
- Google DeepMind Enhances Security for Gemini 2.5
Innovation & Investment
French President Emmanuel Macron unveiled a €20 billion investment plan to boost AI innovation in Europe, while the Greater Vancouver Board of Trade is advocating for AI credentialing programs and tax credits to support adoption.
- France Announces €20 Billion AI Investment
- Greater Vancouver Board of Trade Advocates for AI Credentialing and Tax Credits
AI Safety
Ilya Sutskever, former chief scientist of OpenAI, planned a doomsday bunker due to AGI risks. Anthropic has activated AI Safety Level 3 for Claude Opus 4 to prevent misuse in CBRN weapon development. The Civic AI Security Program demonstrated AI misuse risks to California lawmakers, emphasizing regulatory needs.
- OpenAI's Ilya Sutskever Planned Bunker for AGI Risks
- Anthropic Implements AI Safety Level 3 for Claude Opus 4
- Civic AI Security Program Highlights AI Risks to Lawmakers
Court Cases, Hearings and Lawsuits
Adam Thierer from the R Street Institute testified before the U.S. House Subcommittee, advocating for a pause on new AI regulations. Representative Marjorie Taylor Greene criticized Elon Musk's AI chatbot Grok for controversial statements. A German court ruled that Meta can use user data for AI training, while a federal judge ordered OpenAI to preserve data in a copyright case with The New York Times.
- R Street Institute's Adam Thierer Testifies on AI Regulation
- Elon Musk's AI Chatbot Grok Criticized by Marjorie Taylor Greene
- German Court Allows Meta to Use User Data for AI Training
- Judge Orders OpenAI to Preserve Data in Copyright Case
We hope you enjoyed this article.
Consider subscribing to one of our newsletters like AI Policy Brief or Daily AI Brief, and follow us on social media.