OpenAI's $1B Investment, EU's AI Antitrust Talks, and Japan's AI Education Push - AI Policy Brief #62

April 01, 2026 - AI Policy Brief
Hi there,

Welcome to this week's edition of the AI Policy Brief, where we bring you the latest updates on AI regulations, safety standards, and compliance requirements from around the globe. This week, the Japanese Education Ministry announced plans to include AI content in high school textbooks by 2027, aiming to educate students on AI usage and its implications. Meanwhile, the EU antitrust chief, Teresa Ribera, is set to meet with tech giants like Google and Meta to address concerns over AI dominance.

In other news, UNESCO, UNICEF, and the International Telecommunication Union have launched a Digital Learning Charter to promote equitable digital education. At the RSA Conference 2026, Cisco unveiled new AI security tools to enhance enterprise protection. Additionally, Meta's acquisition of Manus is under regulatory scrutiny in China, highlighting ongoing global regulatory challenges in the AI sector. Stay informed with these and other significant developments in the AI landscape.
Japan to Add AI Content in High School Textbooks by 2027
The Japanese Education Ministry plans to incorporate generative AI topics in high school textbooks starting fiscal 2027, covering AI usage, learning processes, and related issues. Read more
EU Antitrust Chief Meets with Big Tech CEOs Over AI Concerns
The EU antitrust chief, Teresa Ribera, is meeting with CEOs of Google, Meta, OpenAI, and Amazon in the US to discuss concerns about their dominance in AI. Investigations into the business practices of these companies are underway. Read more
UNESCO, UNICEF, ITU Launch Digital Learning Charter
On March 13, 2026, in Helsinki, UNESCO, UNICEF, and the International Telecommunication Union launched a Charter to promote digital learning platforms as public goods, aiming to enhance education access and equity. Read more
Cisco Unveils AI Security Tools at RSA Conference 2026
At the RSA Conference 2026, Cisco announced new security tools designed to protect AI agents in enterprise settings, including enhancements to Zero Trust Access and the DefenseClaw framework. Read more
Meta's Acquisition of Manus Under Regulatory Review in China
China has restricted the movement of Manus co-founders amid a regulatory review of Meta's acquisition of the AI startup, raising concerns over investment rule compliance. Read more
OpenAI Nonprofit to Invest $1 Billion in AI Projects
OpenAI's nonprofit arm has announced leadership changes and plans to invest $1 billion in AI initiatives, focusing on life sciences and AI safety. Read more
UN Introduces AI Guide for Costa Rican Educators
The United Nations has launched a National Guide on Artificial Intelligence for educators in Costa Rica, targeting teachers and students to promote responsible AI use in education. Read more
Anthropic Launches 'Auto Mode' for Claude Code
Anthropic has introduced 'Auto Mode' for its Claude Code AI, enabling autonomous decision-making with built-in safety checks to prevent risky actions. This feature is in research preview and will soon be available to Enterprise and API users. Read more
Kentucky Farmer Rejects $26 Million AI Company Offer
An 82-year-old farmer in Kentucky, Ida Huddleston, has declined a $26 million proposal from a major AI company to sell part of her farm for a data center, citing environmental and economic concerns. Read more
Anthropic Highlights AI Skills Gap and Job Displacement Risks
A report by Anthropic discusses the AI skills gap and potential job displacement, noting that AI hasn't yet caused significant job losses but could impact entry-level white-collar jobs in the future. Read more
Reddit Introduces Human Verification to Fight Bots
Reddit has introduced new measures to address bot activity, including labeling automated accounts and requiring verification for suspected bots using third-party tools. Read more
FCA Justifies Palantir Contract Amid Concerns
The UK's Financial Conduct Authority has defended its decision to award a data analysis contract to Palantir Technologies to combat financial crime, addressing concerns about data access and monopoly risks. Read more
OpenAI Halts ChatGPT's Erotic Mode Development
OpenAI has paused its plans for an 'erotic mode' for ChatGPT following criticism and a strategic shift toward business tools and coding, a shift that also affects other features and projects. Read more
Wikipedia Prohibits AI-Generated Text in Articles
In a policy update, Wikipedia has banned the use of AI-generated text for article creation or rewriting, following a vote among editors. Read more
China Boycotts NeurIPS Over US Sanctions
The China Association for Science and Technology has decided to boycott the NeurIPS AI conference following the conference's ban on submissions from entities under U.S. sanctions, including Chinese firms like Huawei and SMIC. Read more
Chinese Universities Acquire Restricted AI Chips
Four Chinese universities, including two linked to the military, have purchased Super Micro servers with restricted Nvidia AI chips, despite U.S. export restrictions. Read more
Meta, Nvidia, and Roblox Sued Over AI Training
A digital artist has filed lawsuits against Meta, Nvidia, and Roblox, claiming they used 3D models from public repositories to train AI systems without permission, allegedly violating Creative Commons licenses. Read more
Senators Request Energy Data from Data Centers
U.S. Senators Josh Hawley and Elizabeth Warren have urged the U.S. Energy Information Administration to collect detailed energy use data from data centers, highlighting the need for mandatory annual reporting to assess their impact on the electrical grid. Read more
Anthropic Wins Injunction Against Trump Administration
A federal judge has ruled in favor of Anthropic, granting an injunction against the Trump administration for labeling the company a 'supply chain risk.' Read more
White House AI Czar David Sacks Steps Down
David Sacks, the AI and Crypto Czar for the White House, is stepping down to join President Donald Trump's Council of Advisors on Science and Technology as co-chair. This move allows him to focus on a broader range of technology issues. Read more
Openlayer Partners with Telefónica Tech for AI Governance
Openlayer has partnered with Telefónica Tech to integrate its AI governance platform into Telefónica's services, targeting regulated industries in Europe and Latin America. Read more
NeurIPS Reverses Ban on US-Sanctioned Entities
The Conference on Neural Information Processing Systems (NeurIPS) has reversed its decision to ban papers from researchers at US-sanctioned entities after facing a boycott from China's largest technology federation. Read more
Dutch Court Orders X and xAI to Halt AI-Generated Sexual Content
A Dutch court has ordered X and xAI's Grok chatbot to stop generating non-consensual sexualized imagery and child sexual abuse material, with a daily fine for non-compliance. This is the first European court ruling against an AI image generator for such content. Read more
ProviderTrust Launches AI Governance Program
ProviderTrust has introduced its AI Trust & Integrity Program and AI Integrity Council, establishing the first formal AI governance body in the healthcare eligibility data sector. Read more
UK FCA to Implement AI for Regulatory Efficiency by 2026/27
The UK Financial Conduct Authority (FCA) has outlined plans to incorporate AI into its regulatory processes by 2026/27, aiming to enhance efficiency and consumer protection. Read more
LiteLLM Ends Partnership with Delve After Security Breach
LiteLLM has terminated its partnership with compliance startup Delve following a security breach involving malware. The company will now work with Vanta for new security certifications. Read more
Chai AI Deploys 5,000+ GPU Cluster for Safety
Chai AI has launched a 5,000+ GPU cluster to enhance model safety and compliance, focusing on large-scale model alignment and integrating human-centric safety measures. Read more
Baltimore Sues xAI Over Grok's Deepfake Generation
The city of Baltimore has initiated legal action against Elon Musk's xAI, accusing its Grok chatbot of generating nonconsensual sexually explicit images. The lawsuit demands changes to Grok's design and seeks fines. Read more
OpenAI Releases Tools for Teen Safety in AI
OpenAI has released open source prompts to help developers create safer AI applications for teenagers, addressing issues like graphic violence and harmful behaviors. Read more
CCIA Opposes Maryland's AI Chatbot Bill
The Computer & Communications Industry Association has raised concerns about Maryland's HB 952, citing potential legal uncertainties and operational challenges due to its proposed liability standards for AI chatbots. Read more
AI Tools Streamline Nuclear Regulation Process
The U.S. Department of Energy, in collaboration with Idaho National Laboratory, Argonne National Laboratory, Microsoft, and Everstar, has demonstrated AI tools that significantly shorten nuclear regulatory processes, completing document conversion in a single day. Read more
Stanford Study Warns of AI Chatbots' Risks in Personal Advice
A study from Stanford University warns that AI chatbots may validate user behavior, potentially leading to harmful outcomes and increased self-centeredness. Read more
California Mandates AI Safeguards for State Contracts
California Governor Gavin Newsom has signed an executive order requiring companies seeking state contracts to implement measures against AI misuse, including watermarking AI-generated content. Read more

We hope you enjoyed this article.

Consider subscribing to one of our newsletters like AI Policy Brief or Daily AI Brief.
