
Pony.ai's Luxembourg Milestone, White House AI Flexibility, and NIH's Genomic Data Guidance - AI Policy Brief #12
April 08, 2025 - AI Policy Brief
Hi there,
Welcome to this week's edition of the AI Policy Brief, your go-to source for the latest updates on AI regulations, safety standards, and government policies worldwide. This week, we delve into significant developments such as Pony.ai Europe receiving a permit to test its robotaxi services in Luxembourg, a pivotal step for autonomous vehicle deployment in Europe. Meanwhile, the White House has given federal agencies greater flexibility in adopting AI, aiming to streamline AI integration across government.
In the realm of AI safety, the National Institutes of Health (NIH) has issued new guidance on the use of genomic data in AI, reflecting growing concerns about data privacy and ethical AI use. Additionally, the Hong Kong Privacy Office has released guidelines on the use of generative AI, emphasizing the importance of safeguarding personal data. Stay tuned as we explore these stories and more in this edition.
National Policy
Pony.ai Europe has been granted a permit for Level 4 autonomous driving testing by Luxembourg's Ministry of Mobility and Public Works, marking a step forward in its European operations. The White House Office of Management and Budget has announced that federal agencies will now have more flexibility in adopting AI technologies, following an executive order from President Trump.
- Pony.ai Gets Luxembourg Permit for Level 4 Testing
- White House Expands AI Adoption Flexibility for Federal Agencies
International Policy
The EU’s Community of Practice on Public Procurement of AI has updated its AI Model Contractual Clauses to assist public organizations in procuring AI systems. U.S. President Donald Trump has introduced tariffs on tech equipment, potentially affecting Big Tech's AI infrastructure plans. The Tony Blair Institute recommends that the UK ease its AI copyright laws to avoid harming relations with the US.
- EU Updates AI Model Contractual Clauses for Public Procurement
- Trump Tariffs May Affect Big Tech Data Center Plans
- Tony Blair Institute Urges UK to Ease AI Copyright Laws
Regulatory Actions
Builder.ai has hired two auditing firms to review its finances after lowering revenue estimates due to allegations of inflated sales figures. OpenAI has submitted its response to the UK Government's AI and Copyright consultation, advocating for a broad text and data mining exception to support AI innovation and investment. Waymo is exploring the use of data from its robotaxis, including interior camera footage, to train AI models and personalize ads.
- Builder.ai Initiates Audit Amid Sales Allegations
- OpenAI Advocates for Broad TDM Exception in UK Consultation
- Waymo Considers Using Robotaxi Camera Data for AI and Ads
Innovation & Investment
Piyush Goyal announced the Startup India Desk within DPIIT to support startups. Anthropic's Claude models have received FedRAMP High approval on Google Cloud. Google has unveiled 'Sec-Gemini v1', an AI model for cybersecurity. US Federal Reserve Governor Michael Barr emphasized AI's role in banking at a San Francisco Fed conference. The Stanford Institute for Human-Centered AI released its 2025 AI Index, noting AI advancements in the U.S. and China. OpenAI is forming an expert group to guide its transition from a nonprofit to a for-profit structure.
- Startup India Desk Established in DPIIT
- Anthropic's Claude Models Approved for FedRAMP High on Google Cloud
- Google Unveils 'Sec-Gemini v1' for Cybersecurity
- US Federal Reserve Governor on AI in Banking
- Stanford AI Index 2025 Highlights AI Growth
- OpenAI Forms Expert Group for Nonprofit Transition
AI Safety
The National Institutes of Health has issued new guidelines for the use of genomic data in AI, focusing on data protection. The Office of the Privacy Commissioner for Personal Data in Hong Kong has released guidelines for the safe use of Generative AI by employees. Researchers have found that OpenAI's models may memorize copyrighted content, raising transparency concerns. Meta's VP of generative AI has denied allegations of manipulating benchmark scores for its Llama 4 Maverick model, which has faced criticism over its benchmark results. Anthropic has updated its AI model security safeguards to prevent misuse. DeepMind has released a paper on AGI safety, predicting development by 2030. Google has accelerated the release of its Gemini AI models, raising transparency issues.
- Hong Kong Privacy Office Issues Guidelines on Generative AI
- Study Finds OpenAI Models May Memorize Copyrighted Content
- Meta Exec Denies Boosting Llama 4 Scores
- Anthropic Updates AI Model Security Safeguards
- Meta's AI Model Maverick Faces Benchmark Criticism
- NIH Issues Guidance on Genomic Data in AI
- DeepMind Releases Paper on AGI Safety and Risks
- Google Accelerates Gemini AI Model Releases
Court Cases, Hearings and Lawsuits
OpenAI is facing allegations from the AI Disclosures Project that it used copyrighted content from paywalled O'Reilly Media books to train its GPT-4o model without permission. Over 30 lawsuits challenging the use of copyrighted materials in Generative AI training are currently pending in US federal courts. Meanwhile, a House subcommittee hearing revealed differing views on AI regulation between House Republicans and Democrats.
- OpenAI Allegedly Used Paywalled Books for AI Training
- Copyright Lawsuits Over AI Training
- House GOP and Democrats Debate AI Regulation