OpenAI Bans Accounts Misusing ChatGPT for Surveillance

OpenAI has banned accounts from China and North Korea for using ChatGPT in surveillance and influence operations, according to Reuters.

OpenAI has banned several accounts from China and North Korea for allegedly misusing its ChatGPT platform for malicious activities, including surveillance and influence operations, according to Reuters. The accounts were reportedly involved in creating AI-powered tools to monitor social media for anti-China protests and in generating negative content about the United States.

The banned accounts used ChatGPT to develop and debug code for a surveillance tool known as the "Qianyue Overseas Public Opinion AI Assistant," which was designed to collect real-time data on protests in Western countries. The tool reportedly sent surveillance reports to Chinese authorities and intelligence agents.

OpenAI's actions highlight concerns about how authoritarian regimes might exploit AI technologies developed in democratic countries. The company has not disclosed the number of accounts banned or the specific timeframe of these actions. However, it emphasized that using its AI for unauthorized monitoring or surveillance is against its policies.

Beyond the surveillance activities, OpenAI identified other malicious uses of its technology, including generating fake resumes and online profiles for employment scams linked to North Korea, and creating content for influence campaigns in Latin America and other regions.
