Breaking: Chinese Official Accidentally Reveals Vast Influence Operation Through ChatGPT Use
A Chinese law enforcement official disclosed to ChatGPT a vast influence operation targeting foreign adversaries and dissidents worldwide, including the impersonation of U.S. officials to silence critics, according to a new report from OpenAI, the company behind the chatbot.
The Chinese operative used OpenAI’s ChatGPT to help plan an operation to undermine Japan’s pro-Taiwan prime minister as part of the broader campaign against potential threats to the Chinese Communist Party. The individual used ChatGPT to edit and update reports on “cyber special operations,” accidentally revealing the sweeping cyber campaign against foreign enemies and domestic threats.
The periodic reports on “cyber special operations” laid out a “large-scale, resource-intensive and sustained” strategy featuring numerous operations and hundreds of staff. The operations deployed tactics such as the abuse of social media account-reporting tools to target dissidents, mass online posting, forging documents, and impersonating U.S. officials. All of the campaigns were designed to suppress online and offline criticism inside China and abroad. Based on the Chinese user’s status reports, OpenAI concluded that China ultimately conducted the Japan operation without using ChatGPT.
OpenAI said it banned the account linked to the Chinese user and conducted open-source research to identify online activities that lined up with the user’s descriptions. The company revealed the Chinese influence campaign in a broader report laying out foreign operations and scams involving ChatGPT. When the Chinese user sought ChatGPT’s advice on the operation, the model refused to provide any, and the user paused their inputs.
Japanese Prime Minister Sanae Takaichi recently won a landslide electoral victory after calling a snap election only months into her tenure. Her dominant win followed Beijing’s furious reaction to her suggestion in November that Japan would help defend Taiwan from an invasion. The Chinese attempt to discredit her appears to have failed given her electoral success. A charismatic female leader likened to former U.K. Prime Minister Margaret Thatcher, Takaichi first became a target of the Chinese operation after she criticized the state of human rights in Inner Mongolia.
The plan to discredit Takaichi had six elements, as the Chinese user described it to ChatGPT. Posting and amplifying negative comments about her was the first element. Secondly, the user proposed criticizing her stance on immigration, potentially through emails to Japanese politicians from accounts pretending to be foreign residents. The user floated similar strategies for attacking Takaichi on cost of living, U.S. tariffs, and her right-wing political leanings. The last element of the plan consisted of sharing positive statements about human rights in Inner Mongolia.
OpenAI found open-source evidence of online activities that strongly resembled the proposed anti-Takaichi operation, indicating that it went forward after ChatGPT declined to assist. The company carried out similar research on other campaigns the Chinese user described that were designed to harass dissidents online.
While using ChatGPT, the Chinese account referenced over 100 tactics for conducting targeted harassment campaigns against critics and dissidents. Individual tactics were sorted into broader categories including the manipulation of narratives, exploitation of platforms, attacks on the legitimacy of critics, and the exertion of social pressure. Beyond these human-driven tactics, the Chinese user’s inputs to ChatGPT revealed how domestic AI models such as China’s DeepSeek are equipped to bolster the targeted intimidation campaigns.
The tactics OpenAI described are not new information for U.S. federal law enforcement. The U.S. government has waged federal prosecutions against groups of Chinese hackers and sought to crack down on Chinese espionage across American institutions. Last year, OpenAI’s main rival Anthropic released a report showing Chinese government-backed cyber operatives attempted to manipulate the company’s chatbot Claude into carrying out espionage. The Anthropic report and OpenAI’s latest disclosure reflect the growing importance of AI cyberdefense in protecting U.S. national security from foreign adversaries and other malign actors.