
OMB's New AI Guidelines, Saudi Arabia's Global AI Hub Proposal, and Meta's Teen Verification - AI Policy Brief #15
April 29, 2025
AI Policy Brief
Hi there,
Welcome to this week's edition of the AI Policy Brief, where we bring you the latest updates on AI regulations, safety standards, and government policies from around the world. This week, the Office of Management and Budget (OMB) has released new AI guidelines for federal agencies, aiming to streamline AI integration while ensuring compliance with existing regulations. Meanwhile, the Government Accountability Office (GAO) has recommended policy reforms to address the risks associated with generative AI technologies.
On the international front, the British Standards Institution (BSI) is discussing the implications of the EU AI Act and its impact on cybersecurity legislation. Additionally, Saudi Arabia has proposed a global AI hub law to enhance data sovereignty, reflecting the growing emphasis on data protection in AI governance. Stay tuned as we delve into these stories and more, providing you with a comprehensive overview of the evolving AI policy landscape.
National Policy
The Office of Management and Budget has issued new guidelines for federal agencies on AI use, while the Government Accountability Office recommends policy reforms to address generative AI risks. The Trump administration is considering an executive order to introduce AI education in schools, and AdvaMed has unveiled an AI Policy Roadmap for Congress. NITRD received over 10,000 comments on its AI Action Plan.
- OMB Releases New AI Guidelines for Federal Agencies
- GAO Recommends AI Policy Reform
- Trump Plans AI Education Order for Schools
- AdvaMed Unveils AI Policy Roadmap for Congress
- NITRD Collects Over 10,000 Comments on AI Action Plan
International Policy
BSI discusses the EU AI Act, in force since August 2024, focusing on cybersecurity and risk categorization. The Caribbean Examinations Council (CXC) is developing an AI policy framework for education, while Guangdong Province mandates AI education hours in schools. Intel now requires export licenses for AI chips shipped to China, and Saudi Arabia proposes a Global AI Hub Law to strengthen data sovereignty.
- BSI Discusses EU AI Act and Cybersecurity
- CXC Develops AI Policy Framework for Caribbean Education
- Guangdong Province Launches AI Education Framework
- Intel Requires Export License for AI Chips to China
- TRENDS and CAIDP Discuss AI Governance in Washington
- IMF Report: AI Economic Gains to Surpass Emissions Costs
- Saudi Arabia Proposes Global AI Hub Law
- China Tightens Self-Driving Tech Regulations After Accident
- Xi Jinping Advocates for AI Self-Sufficiency
- Dubai RTA Unveils AI Strategy 2030 with 81 Initiatives
- Japan Faces Legal Challenges with AI Deepfakes of Children
- India and Italy Sign Agreement on Quantum, AI, and Biotech
Regulatory Actions
The Chartered Insurance Institute (CII) has called for AI accountability frameworks in financial services, while California's Senate Bill 813 proposes a certification framework for AI developers. The Law Commission of Ontario is reviewing AI in criminal justice, and Meta faces scrutiny over chatbots engaging in explicit conversations. Reema Shah joins O'Melveny to advise on AI regulation, and Nigeria has confirmed a $220 million fine against Meta for data breaches.
- CII Advocates for AI Regulation in Finance
- California Bill Proposes AI Certification Safe Harbor
- Law Commission of Ontario Reviews AI in Criminal Justice
- Meta's Chatbots Engage in Inappropriate Conversations
- Reema Shah Joins O'Melveny as Partner in New York
- Nigeria Confirms $220 Million Fine Against Meta
Defense & Security
Reality Defender has formed a Government Advisory Board to improve deepfake detection and secure communications. Microsoft AI launched the MAI-DS-R1 model with enhanced security on Azure AI Foundry and Hugging Face. The Royal Thai Police introduced an AI robot for the Songkran festival to enhance public safety. South Korea initiated a national strategy to defend against AI-driven cyberattacks. Turkey's Ministry of Justice launched an AI tool for terrorism classification, raising legal and bias concerns. The Netherlands announced a €310 million investment in defense AI and drones. Europol warned about AI-generated fakes threatening biometric security.
- Reality Defender Establishes Government Advisory Board
- Microsoft Launches MAI-DS-R1 with Enhanced Security
- Royal Thai Police Introduces AI Robot for Songkran Festival
- South Korea Launches Cyber Defense Against AI Attacks
- Turkey Introduces AI Tool for Terrorism Classification
- Netherlands to Invest in AI and Drones for Defense
- Europol Warns AI Fakes Threaten Phone Security
Innovation & Investment
Taiwan's Legislative Yuan has amended the Statute for Industrial Innovation, raising the annual tax credit limit for AI and startup investments to NT$2 billion, effective through 2029. India's Ministry of Electronics and Information Technology has allocated AI workloads to Yotta Data Services, E2E Networks, and NxtGen Cloud Technologies under the IndiaAI Mission, providing GPUs at subsidized rates.
- Taiwan Increases Tax Credits for AI and Startups
- India Assigns AI Projects to Yotta, E2E, and NxtGen
AI Safety
OpenAI's GPT-4.1 shows increased misalignment compared to its predecessor, raising concerns about reliability. Meta uses AI to verify teen ages on Instagram, converting flagged accounts to restricted Teen Accounts. The Association of Secondary Teachers of Ireland warns about AI's impact on Irish education. DataKrypto launches FHEnom for AI™ with fully homomorphic encryption. Researchers find a method to bypass LLM safety filters, and Bloomberg highlights risks of RAG LLMs in finance.
- OpenAI GPT-4.1 Shows Increased Misalignment
- Meta Uses AI to Verify Teen Ages on Instagram
- ASTI Warns of AI Impact on Irish Senior Cycle Education
- DataKrypto Launches FHEnom for AI with Fully Homomorphic Encryption
- Researchers Find Method to Bypass LLM Safety Filters
- Bloomberg Researchers Highlight Risks of RAG LLMs in Finance
Court Cases, Hearings and Lawsuits
Anthropic has filed a DMCA complaint, citing copyright infringement, after a developer reverse-engineered its AI tool, Claude Code, and uploaded the source code to GitHub. Separately, analysts predict that a potential breakup of AI leaders Meta and Google could spur innovation in the AI sector, drawing parallels to the breakup of AT&T in 1984.
- Anthropic Issues Takedown Notice Over Source Code Leak
- Potential Breakup of Meta and Google May Spur AI Innovation