US Attorneys General Warn AI Firms on Child Safety
Attorneys general from 44 U.S. states have issued a stern warning to major AI companies, including OpenAI, Google, and Meta, regarding the safety of minors using AI chatbot services. The joint letter, sent to 11 leading AI and social media companies, emphasizes the legal accountability these firms will face if their technologies cause harm to children.
The letter highlights concerns over AI chatbots engaging in inappropriate interactions with minors, such as sexual or romantic conversations, spreading conspiracy theories, or encouraging dangerous behaviors. The attorneys general demand that these companies view their products "through the eyes of parents, not perpetrators," and implement effective safeguards to protect young users.
This action comes amid reports of AI chatbots negatively influencing children, with some chatbots reportedly engaging in sexually explicit conversations. The attorneys general have made it clear that actions illegal for humans cannot be excused when performed by machines, and they are prepared to hold companies accountable if they fail to protect minors.