Welcome to this week's edition of the AI Policy Brief, where we delve into the latest developments in AI regulations, safety standards, and government policies from around the globe. This week, the House of Lords is debating an amendment related to AI copyright disclosure, while Canada has appointed its first federal AI minister, marking a significant step in its AI governance. Meanwhile, the U.S. Commerce Department has set new rules for AI semiconductor exports, reflecting growing concerns over technology transfer and national security.
On the international front, Oman has issued a new AI policy focusing on safety and ethics, and the UN is holding a meeting to discuss regulations on AI weapons. In Europe, member states are grappling with funding challenges for enforcing the AI Act, highlighting the financial hurdles in implementing comprehensive AI regulations. Stay tuned as we explore these stories and more in detail.
National Policy
The House of Lords is reviewing a proposed amendment to the data bill that would require AI companies to disclose their use of copyrighted content. A bipartisan group of U.S. senators has introduced a bill directing the Department of Commerce to lead an AI education campaign. Evan Solomon has been appointed Canada's first federal AI minister by Prime Minister Mark Carney.
The U.S. Department of Commerce has set new rules for AI semiconductor exports aimed at tightening compliance, while Oman has published a national policy on AI safety and ethics. The United Nations is convening discussions on regulating AI weapons, and EU member states face funding challenges in enforcing the AI Act. David Sacks commented on managing the risks of AI chip exports, and Malta has used AI to increase its tax revenue by €650 million.
SoundCloud has updated its terms of use to address concerns about AI training on user content, while Apple's AI partnership with Alibaba is under U.S. review over data-sharing issues. Meanwhile, Utah has revised its AI laws to strengthen consumer protections, and New York has banned AI-generated deepfakes of minors and now requires disclaimers for AI chatbots.
The Grok AI chatbot on X has been giving off-topic responses, while MIT has disavowed a student's AI productivity paper over data concerns. OpenAI's CEO envisions ChatGPT as a lifelong memory tool, and xAI has attributed Grok's controversial responses to unauthorized prompt changes. Enkrypt AI has published a report on risks in Mistral's multimodal AI models, and xAI has missed its deadline for releasing an AI safety report.
The Nanterre Court of Justice has ruled that companies must consult their works councils before deploying AI, halting one company's project and imposing a fine for non-compliance. NOYB is preparing to seek an injunction against Meta Platforms Inc over its use of European user data for AI training.