
OpenAI Faces GDPR Complaint Over ChatGPT's False Information
OpenAI is facing a new GDPR complaint in Europe after its AI chatbot, ChatGPT, generated false and defamatory information about an individual. The complaint, supported by the privacy rights advocacy group Noyb, involves a Norwegian man who discovered that ChatGPT falsely claimed he had been convicted of murdering his children. Noyb has filed the complaint with the Norwegian data protection authority, arguing that OpenAI's handling of personal data violates the GDPR's data accuracy requirement.
AI-generated falsehoods, or "hallucinations," have been a recurring problem for ChatGPT. Previous incidents have included false accusations of corruption and child abuse. OpenAI has responded by displaying disclaimers about potential inaccuracies, but Noyb argues that disclaimers are insufficient under the GDPR. The complaint underscores that AI companies remain responsible for the accuracy of the personal data they process.
Following the incident, OpenAI updated ChatGPT to search the internet for information about individuals, which has reportedly stopped the false claims about the Norwegian complainant. However, concerns remain about the retention of incorrect data within the AI model. Noyb is urging the Norwegian authority to order OpenAI to delete the defamatory output and adjust its model to prevent future inaccuracies.