Senators Scrutinize AI Data Centers, OpenAI's Global Initiative, and India's AI Royalty Proposal - AI Policy Brief #49

January 02, 2026 - AI Policy Brief
Hi there,

Welcome to the latest edition of the AI Policy Brief, your go-to source for updates on AI regulations, safety standards, and government policies worldwide. This week, we're diving into a range of topics, including the U.S. House's progress on the SPEED Act, which aims to streamline permitting for AI infrastructure, and the European Commission's release of a draft code for AI content labeling. These developments highlight the ongoing efforts to establish a robust framework for AI governance.

Additionally, we'll explore the recent collaboration between NobleReach and the U.S. government to recruit AI talent, as well as OpenAI's new initiative led by George Osborne. These stories reflect the dynamic landscape of AI policy and the global push towards ensuring ethical and secure AI deployment. Stay informed with us as we continue to track these important changes in the AI world.
This week's AI Policy Brief highlights significant developments in AI regulation and governance. OpenAI has appointed former UK Chancellor George Osborne to lead its 'OpenAI for Countries' initiative, aiming to bolster AI capabilities in governments globally. Meanwhile, India has proposed a royalty system for AI companies like OpenAI and Google to compensate for using copyrighted content in model training, reflecting ongoing efforts to balance innovation with intellectual property rights.
Senators Investigate Tech Giants on AI Data Centers' Energy Impact
U.S. senators are probing how data centers operated by Google, Amazon, and Meta affect household energy costs, citing concerns that growing electricity demand is driving up utility bills. Read more
OpenAI Appoints George Osborne for Global AI Initiative
Former UK Chancellor George Osborne has been appointed by OpenAI to lead its 'OpenAI for Countries' program, which aims to enhance AI capacity in governments worldwide. Read more
India Proposes AI Royalty System for Copyrighted Content
India's Department for Promotion of Industry and Internal Trade has suggested a framework for AI companies like OpenAI and Google to pay royalties for using copyrighted content in model training. Read more
State Attorneys General Urge AI Firms to Address Chatbot Issues
U.S. state attorneys general have urged major AI companies like Microsoft, OpenAI, and Google to tackle 'delusional outputs' from AI chatbots, recommending new safeguards and audits. Read more
Oklahoma's 'Behind-the-Meter' Law for AI Data Centers
Oklahoma officials, including Governor Kevin Stitt, support a 2024 law requiring large power users such as AI data centers to generate their own electricity, shielding residential consumers from higher utility bills. Read more
Regulatory Challenges for AI in Medical Robotics
A publication by WIK (No. 535) examines the regulatory challenges that the EU AI Regulation presents for medical robotics. It discusses how the AI Act integrates with the existing Medical Device Regulation (MDR) and the classification of medical robotics as high-risk AI. Read more
Red Hat Acquires Chatterbox Labs for AI Security
Red Hat has acquired Chatterbox Labs to enhance its AI portfolio with safety and security capabilities, focusing on responsible AI deployment. Read more
Awesome Compliance Technology Secures €1.2m for AI Platform
Dutch startup Awesome Compliance Technology has raised €1.2 million to develop a collaborative AI platform aimed at assisting with compliance for regulations like GDPR and the EU AI Act. Read more
New York Governor Proposes AI Bill Rewrite
New York Governor Kathy Hochul has proposed a rewrite of the RAISE Act, drawing criticism for aligning with tech lobby interests and weakening safety measures. Read more
Cisco Unveils AI Security Framework
Cisco has launched its Integrated AI Security and Safety Framework to address AI security and safety risks. The framework aims to provide a comprehensive approach by mapping various AI threats and offering a unified taxonomy to improve communication among stakeholders. Read more
Council of Europe Official Warns of AI's Threat to Democracy
At a London conference, Council of Europe Deputy Secretary General Bjørn Berge discussed the potential risks AI poses to democratic processes, emphasizing the need for stronger parliamentary oversight in AI regulation. Read more
Adobe Faces Lawsuit Over AI Training Practices
A proposed class-action lawsuit has been filed against Adobe for allegedly using pirated books, including works by author Elizabeth Lyon, to train its SlimLM AI model. Read more
NobleReach and U.S. Government Collaborate on AI Talent Recruitment
NobleReach has partnered with the U.S. Government to recruit AI talent for national service, aiming to boost the country's AI capabilities. Read more
NVIDIA Advances Robotaxi AI Safety
NVIDIA is enhancing the safety of robotaxis by using OpenUSD together with its Omniverse and Halos frameworks to standardize simulation and validation processes. This approach aims to improve autonomous vehicle deployment through synthetic data generation and virtual testing that replicates real-world conditions. Read more
Wodan AI Raises €2M for Encrypted AI Tech
The Spanish-Belgian startup Wodan AI has secured €2 million in funding to advance its AI technology that processes encrypted data, aiding compliance with privacy regulations. Read more
FDA Evaluates AI Mental Health Device Regulations
The FDA's Digital Health Advisory Committee convened to discuss regulatory strategies for generative AI mental health devices, focusing on a risk-based approach to ensure safety and effectiveness. Read more
OpenAI Updates ChatGPT Rules for Teen Safety
OpenAI has introduced new guidelines for ChatGPT to enhance safety for users under 18, including restrictions on roleplay and a focus on real-world support. This move comes amid legislative scrutiny on AI standards for minors. Read more
Oracle and OpenAI Get Green Light for Michigan Data Center
Michigan regulators have approved a new data center project by Oracle and OpenAI in Saline Township, following a hearing on environmental and community concerns. Read more
Google Launches Gemma Scope 2 for AI Safety
Google has released Gemma Scope 2, a suite of interpretability tools aimed at improving the understanding of complex language model behavior. This toolkit is compatible with all sizes of the Gemma 3 model and marks the largest open-source release of interpretability tools by an AI lab. Read more
Softex Observatory Recommends AI Regulation in Brazil
The Softex Observatory has released a policy brief suggesting a risk-based regulatory model for AI in Brazil, inspired by international standards. Key recommendations include a National AI Assessment System and a Trusted AI Seal. Read more
US Reviews Nvidia AI Chip Sales to China
The US government is reviewing potential sales of Nvidia's H200 AI chips to China, which could indicate a policy shift regarding previous restrictions. Read more
U.S. House Advances SPEED Act for AI Infrastructure
The U.S. House of Representatives has moved forward with the SPEED Act, which aims to streamline the federal permitting process for AI data centers and semiconductor facilities. Supported by major tech companies, the bill passed a procedural vote but faces environmental concerns. Read more
Check Point Infinity Global Services Launches AI Security Training
Check Point Infinity Global Services has introduced its first AI security training courses to help organizations combat AI-driven threats. The courses offer various tracks for different levels of security practitioners and developers. Read more
MHRA Seeks Public Input on AI Regulation in UK Healthcare
The MHRA is inviting public contributions to help shape AI regulation in UK healthcare, focusing on patient safety and the future use of AI in the NHS. Read more
European Commission Drafts Code for AI Content Labelling
The European Commission has unveiled a draft Code of Practice for marking and labelling AI-generated content, as part of the upcoming AI Act requirements. Feedback is open until January 2026, with a final version anticipated by June 2026. Read more
EU Proposes AI Regulation Changes for Medical Devices
The European Commission has suggested revisions to streamline regulations for medical devices with AI components, aiming to reduce administrative burdens and improve regulatory coordination. Read more
European Commission Seeks Feedback on AI Act Provisions
The European Commission has launched consultations on copyright obligations for AI providers and the creation of AI regulatory sandboxes under the EU AI Act, inviting stakeholder input until early January 2026. Read more
NIST Drafts AI Cybersecurity Guidelines
NIST has released a draft of its Cyber AI Profile, providing guidelines for securing AI systems and enhancing cybersecurity. Public feedback is open until January 30, 2026. Read more
Anthropic Introduces Compliance Framework for California AI Law
California's Transparency in Frontier AI Act takes effect in 2026, requiring AI developers to adhere to new safety and transparency standards. Anthropic has unveiled its Frontier Compliance Framework to address these requirements and manage AI risks. Read more
Google Sues SerpAPI for Copyright Violations
Google has filed a lawsuit against SerpAPI for allegedly bypassing security measures to unlawfully scrape copyrighted content from Google search results. Read more
FERC Instructs PJM to Develop New Colocation Rules
The US Federal Energy Regulatory Commission has directed PJM Interconnection to create new tariff rules for colocating large electric loads, such as data centers, to ensure fair grid connection and cost allocation. Read more
Anthropic Launches Bloom for AI Model Evaluation
On December 19, 2025, Anthropic introduced Bloom, an open-source framework to automate behavioral evaluations of AI models, enhancing the evaluation process for AI alignment. Read more
Seattle DOT Uses AI for Vision Zero Safety Goals
The Seattle Department of Transportation has adopted C3 AI Safety Analysis to enhance road safety and support its Vision Zero initiative, aiming to eliminate traffic fatalities and serious injuries by 2030. Read more
Cedars-Sinai and University Health Network Host AI Symposium
Over 200 international leaders in AI, medicine, and surgery gathered at Cedars-Sinai for the second annual symposium co-hosted with University Health Network, focusing on AI implementation in clinical settings. Read more
CultureAI Joins Microsoft's AI Safety Initiative
CultureAI, a UK-based AI safety company, has been selected for Microsoft's Agentic Launchpad, a program supporting startups in AI safety and control. The initiative involves collaboration with NVIDIA and WeTransact to enhance AI safety in autonomous environments. Read more
New York Governor Signs RAISE Act for AI Safety Regulation
Governor Kathy Hochul of New York has signed the RAISE Act, introducing significant AI safety legislation in the state. The act requires large AI developers to disclose safety protocols and report incidents within 72 hours. Read more
HHS Seeks Public Input on AI in Clinical Care
The U.S. Department of Health and Human Services has issued a Request for Information to gather public input on accelerating AI adoption in clinical care, with comments due by February 21, 2026. Read more
Odisha Government Partners with OpenAI for AI Skill Development
The Odisha Government has partnered with OpenAI to enhance AI skills and promote responsible AI adoption in the state through training programs and AI-powered solutions. Read more
Senator Markey Proposes AI Regulation Bill
Senator Edward J. Markey has introduced the States’ Right to Regulate AI Act to counter an Executive Order from the Trump Administration that restricts state-level AI regulations. Read more
New Jersey and New York Regulate AI Hiring Tools
New Jersey and New York have introduced regulations to ensure ethical use of AI in hiring, focusing on bias and fairness. Read more
Checkmarx Identifies 'Lies-in-the-Loop' Attacks on AI Code Assistants
Researchers at Checkmarx have discovered a new attack method called 'Lies-in-the-Loop' that targets safety mechanisms in AI code assistants, such as Anthropic's Claude Code and Microsoft's Copilot Chat, by exploiting Human-in-the-Loop dialogs. Read more
INDEX SOFT LIMITED Highlights Demand for AI Talent in Fintech
A report by INDEX SOFT LIMITED reveals a growing demand for AI talent in the fintech industry, driven by the adoption of AI technologies and the need for compliance and fairness in AI systems. Read more
Japan's First AI Basic Plan Adopted
The Japanese Government has introduced its inaugural basic plan on AI, focusing on fostering reliable AI development while managing associated risks. Read more
Japan Investigates AI Search Services for Antitrust Violations
The Japan Fair Trade Commission is investigating AI search services by companies like Google and Microsoft over potential antitrust violations concerning copyright issues. Read more
Nvidia to Ship H200 AI Chips to China by February
Nvidia is set to begin shipping its H200 AI chips to China by mid-February, pending government approval. This move follows a U.S. policy shift allowing such sales with a fee, reversing a previous ban. Read more
EPA Launches Resource Page for Data Centers and AI Facilities
The Environmental Protection Agency has launched a webpage to aid developers in the permitting process for data centers and AI projects, as part of efforts to modernize regulations and streamline development. Read more
FINRA Identifies Oversight Gaps in Financial Firms' Use of Generative AI
The Financial Industry Regulatory Authority (FINRA) has reported significant oversight gaps as financial firms adopt generative AI technologies, highlighting issues like inadequate controls and supervision. Read more
Cautio Acquires Bytes for 2-Wheeler Safety
Bengaluru-based visual telematics startup Cautio has acquired Bytes, an AI-powered 2-wheeler safety technology startup, to enhance road safety in India. Read more
John Carreyrou Sues AI Firms Over Copyright Infringement
Investigative reporter John Carreyrou has filed a lawsuit against several AI companies, including xAI, Anthropic, Google, OpenAI, Meta Platforms, and Perplexity, for allegedly using copyrighted books without permission to train their AI systems. Read more
Japan Allocates ¥1 Trillion for AI Development
The Japanese Government plans to invest ¥1 trillion over five years to establish a new company for developing domestic AI, involving firms like SoftBank Group Corp. Read more
South Korea's 700 Billion Won AI Investment for 2026
The Ministry of Trade, Industry and Resources of South Korea has announced a significant investment of 700 billion won for the M.AX Alliance in 2026, aimed at enhancing AI factories and technology. Read more
Italy Halts Meta's Ban on Rival AI Chatbots in WhatsApp
The Italian Competition Authority has directed Meta to stop its policy that bans the use of WhatsApp's business tools for competing AI chatbots, following an investigation into potential market dominance abuse. Read more
HHS Proposes Changes to Health AI Tool Regulations
The U.S. Department of Health and Human Services has proposed new rules to eliminate certain transparency requirements for AI tools in healthcare, aiming to reduce regulatory burdens and costs for developers. Read more
Schaffengott Forms AI Joint Venture in Czech Republic
Korean AI safety firm Schaffengott has partnered with UTP Czech s.r.o. to form a joint venture aimed at enhancing European operations through advanced disaster safety technologies. Read more
US Lawmakers Seek Details on Nvidia H200 Sales to China
Two senior Democratic lawmakers have requested the U.S. Commerce Department to reveal details about license reviews for Nvidia's H200 AI chips sales to Chinese companies, following a policy shift announced by President Trump. Read more
U.S. Lawmakers Request Pentagon to List DeepSeek and Xiaomi
Nine U.S. lawmakers have urged the Pentagon to add DeepSeek and Xiaomi to a list of firms allegedly aiding the Chinese military. This follows a military spending bill signed by President Donald Trump. Read more
Armenian Government Partners with AWS for AI Pilot Program
The Armenian Government has initiated a pilot program in collaboration with Amazon Web Services to support AI startups by providing access to high-performance computing resources. Read more
U.S. Department of Energy Highlights AI and Cybersecurity Risks
A report from the U.S. Department of Energy identifies artificial intelligence and cybersecurity as major management challenges for 2026, emphasizing the need for improved oversight to protect critical infrastructure. Read more
AIM Intelligence, LG, and OpenMind Partner for AI Safety Research
AIM Intelligence is teaming up with LG Electronics and OpenMind to conduct AI safety research in South Korea. The collaboration focuses on identifying dangerous situations for physical AI and creating a comprehensive dataset. Read more
China Warns of AI Deepfake Threats to National Security
China's Ministry of State Security has issued a warning about foreign forces using deepfake technology to create misleading videos. The warning highlights the risks of AI, such as data privacy and algorithmic bias, and calls for stronger security measures. Read more
Taiwan Passes AI Basic Act for Governance
The Legislative Yuan in Taiwan has approved the Artificial Intelligence Basic Act, setting governance principles and designating the National Science and Technology Council as the authority. Read more
China Proposes New AI Regulations
On December 27, 2025, China's cyber regulator released draft rules to oversee AI services that simulate human interaction, focusing on user safety and addiction prevention. Read more
Bernie Sanders Calls for AI Datacenter Moratorium
US Senator Bernie Sanders has raised concerns about artificial intelligence, describing it as a pivotal technology and calling for a moratorium on new AI datacenters. He highlighted the economic impact on Americans and the financial motives of tech leaders. Separately, Senator Katie Britt proposed legislation to safeguard minors from harmful AI interactions. Read more
China Updates Aviation Law for Drone Regulation
On December 27, 2025, China passed a revised aviation law to regulate unmanned aircraft, aiming to enhance safety standards in the drone sector. The new rules will be effective from July 1, 2026. Read more
Indonesia Appoints OpenAI as VAT Collector
Indonesia's tax authority has appointed OpenAI OpCo LLC as a value-added tax collector for digital services, aiming to increase revenue from the digital economy. Read more
China Proposes AI Chatbot Regulations
China's cybersecurity regulator has proposed new rules to limit AI chatbots from influencing human emotions in harmful ways, such as leading to suicide or self-harm. The draft regulations also aim to restrict content related to gambling and violence. Read more
US Delays Tariffs on Chinese Semiconductor Imports
The US government has postponed the implementation of tariffs on Chinese semiconductor imports until June 2027, following an investigation into China's chip industry practices. This decision is part of efforts to ease trade tensions with Beijing. Read more
US Grants Samsung and SK Hynix Chip Tool Licenses for China
The U.S. government has approved licenses for Samsung Electronics and SK Hynix to ship chipmaking equipment to China in 2026, following the revocation of previous waivers. Read more
AI Lobbying in Washington Sees Significant Increase
A report by Bloomberg Government reveals a notable rise in AI lobbying activities in Washington, D.C., involving both major firms and startups aiming to shape AI regulations and policies. Read more
China Requires 50% Domestic Equipment for AI Chipmakers
The Chinese government has introduced a rule for chipmakers to use at least 50% domestically produced equipment for new capacity additions, aiming to boost self-sufficiency amid U.S. export restrictions. Read more
CaixaBank Establishes AI Office for Ethical Compliance
CaixaBank has created an Artificial Intelligence Office to ensure AI projects adhere to ethical standards and regulations, as part of its strategic plan. Read more
OpenAI Joins Ireland's Digital Inclusion Charter
OpenAI has signed Ireland's Charter for Digital Inclusion, committing to initiatives that promote digital literacy and equitable access to technology. Read more
TruePath Vision Unveils AI Weapon Detection System
TruePath Vision has launched a new AI system designed to detect weapons in real time using existing camera infrastructure, enhancing safety in various public spaces. Read more
WHO Calls for AI Liability Standards in Healthcare
The World Health Organization is urging countries to set clear liability standards for AI in healthcare to keep pace with the rapid adoption of these technologies. Read more
White House Introduces National AI Regulation Framework
The White House has launched a national framework for AI regulation, signed by President Donald Trump, to preempt state laws and establish a uniform federal standard. This initiative aims to enhance U.S. competitiveness in AI and address regulatory fragmentation. Read more
CISA and Partners Release AI Integration Guidelines for OT
Guidance from CISA and nine partner agencies outlines principles for safely integrating AI into Operational Technology, focusing on risk understanding, selective usage, governance, and safety measures. Read more
Japan to Double AI Safety Institute Staff
The Japanese Government plans to increase the workforce at the AI Safety Institute to improve safety checks for AI technologies, as part of a draft AI strategy. Read more
LG and Seoul National University Open AI Research Center
On December 14, 2025, LG Electronics and Seoul National University announced the launch of the Secured AI Research Center in Seoul, focusing on AI security and data protection. Read more
Boston Consulting Group on AI Agent Risk Management
A publication from Boston Consulting Group addresses the growing risks of autonomous AI agents, noting a 21% rise in incidents over the past year. The report suggests a four-part framework for managing these risks, focusing on real-time monitoring and management shifts. Read more
Türkiye Forms Public AI Directorate for Governance
The Turkish government has restructured its AI governance by establishing a new Public Artificial Intelligence Directorate General under the Cybersecurity Directorate to develop policies and ensure ethical use. Read more
Ontario Requires AI Disclosure in Hiring by 2026
The Ontario government will require employers with over 25 workers to disclose AI use in hiring processes starting January 1, 2026, along with providing salary ranges and timely hiring decisions. Read more
OpenAI Seeks Head of Preparedness for AI Security
OpenAI is hiring a Head of Preparedness to address AI misuse and cybersecurity threats. The role, announced by CEO Sam Altman, offers a $555,000 salary and focuses on risk identification and safeguard development. Read more
California Introduces AI Safety Laws for 2026
California has enacted new laws to regulate artificial intelligence, focusing on child protection, digital privacy, and industry standards, effective January 1, 2026. An executive order from President Donald Trump challenges these regulations with a proposed national AI standard. Read more
China Proposes AI Training Consent Regulations
The Cyberspace Administration of China has proposed new regulations requiring AI platforms to obtain user consent before using chat logs for training, aiming to enhance user privacy and safety. Read more
Ohio Education Department Sets AI Policy Deadline for Schools
The Ohio Department of Education and Workforce has announced a new AI policy model for schools, mandating adoption by 2026 to address privacy, bullying, and academic integrity. Read more
X Corp and Eliza Labs Settle AI Lawsuit
Eliza Labs has settled its lawsuit against X Corp, which alleged misuse of proprietary information for AI product development. The case was dismissed with prejudice by a Texas federal judge. Read more
Defence Holdings Partners with Gloucestershire Police for AI in Interviews
Defence Holdings has teamed up with Gloucestershire Police to introduce AI automation in police interviews, aiming to improve evidential workflows. The project will begin with setting performance benchmarks before a wider implementation in 2026. Read more
UltraGreen.ai Gains Approval for AI Health Solutions in Southeast Asia
UltraGreen.ai, a Singapore-based firm, has received regulatory approvals for its Verdye product in the Philippines and the IC-Flow Imaging System V2 in Malaysia, enhancing surgical capabilities in the region. Read more
Rajasthan Unveils AI and ML Policy 2026
The Rajasthan government has launched its AI and ML Policy 2026 to boost the state's contribution to India's AI sector, focusing on innovation, startups, and sector integration. Read more
Mauritania Launches AI for Traffic Violation Detection
The Mauritanian government has introduced an AI system to detect traffic violations in real time, focusing initially on vehicle overloading and seatbelt compliance to improve road safety. Read more

We hope you enjoyed this article.

Consider subscribing to one of our newsletters like AI Policy Brief or Daily AI Brief.

Also, consider following us on social media.

Subscribe to AI Policy Brief

Weekly report on AI regulations, safety standards, government policies, and compliance requirements worldwide.