
A versatile cutter and paster of prose, ChatGPT is an expedient tool for students and others willing to fob off any old thing as their own rather than inquire and think for themselves. It’s also expedient for Chinese Communist Party propagandists and “influence operators” eager to erect Potemkin villages of fake social media engagement.
But AI-bot blather can be detected, especially by the company that provides the means of producing it. Thus “OpenAI takes down covert operations tied to China and other countries” (OPB, June 5, 2025).
Chinese propagandists are using ChatGPT to write posts and comments on social media sites—and also to create performance reviews detailing that work for their bosses, according to OpenAI researchers….
In the last three months, OpenAI says it disrupted 10 operations using its AI tools in malicious ways, and banned accounts connected to them. Four of the operations likely originated in China, the company said….
One Chinese operation, which OpenAI dubbed “Sneer Review,” used ChatGPT to generate short comments that were posted across TikTok, X, Reddit, Facebook and other websites, in English, Chinese and Urdu. Subjects included the Trump administration’s dismantling of the U.S. Agency for International Development—with posts both praising and criticizing the move—as well as criticism of a Taiwanese game in which players work to defeat the Chinese Communist Party.
In many cases, the operation generated a post as well as comments replying to it, behavior OpenAI’s report said “appeared designed to create a false impression of organic engagement.” The operation used ChatGPT to generate critical comments about the game, and then to write a long-form article claiming the game received widespread backlash.
OpenAI’s February 2025 threat report and its new June 2025 threat report detail the various ways that government agents maliciously use ChatGPT. The firm’s method of thwarting the abuse is simply to shut down the bad guys’ ChatGPT accounts. Often, the fabricated social media engagement is still at an early stage and has attracted little genuine engagement by the time OpenAI detects the activity and closes the accounts.
Also see:
OpenAI: Disrupting malicious uses of AI: June 2025
OpenAI: Disrupting malicious uses of our models: an update: February 2025