Recent investigations reveal that threat actors linked to both China and Iran have developed novel methods of manipulating American artificial intelligence models for malicious purposes. These activities raise serious concerns about the potential for covert influence operations, according to a report by OpenAI.
The February report details several disruptions believed to involve individuals or groups operating from China. The investigations highlight attempts to misuse AI models created by leading tech companies, including OpenAI and Meta. One significant case involved the banning of a ChatGPT account that produced disparaging remarks about Chinese dissident Cai Xia. These comments were disseminated via social media accounts purporting to represent users in India and the United States, yet they failed to generate substantial engagement.
Additionally, the same actor leveraged ChatGPT to craft long-form Spanish-language articles that presented negative depictions of the United States and found their way into mainstream Latin American news outlets. The bylines sometimes credited an individual author and, in some cases, a Chinese company.
During a recent press briefing attended by Fox News Digital, Ben Nimmo, the Principal Investigator with OpenAI’s Intelligence and Investigations team, disclosed that one translation was marked as sponsored content on at least one occasion, indicating financial backing behind the operation.
OpenAI categorized this as a landmark instance of a Chinese actor managing to place anti-U.S. narratives in mainstream media aimed at Latin American audiences. Nimmo stressed the importance of understanding how AI is used in these contexts: “Without a view of that use of AI, we would not have been able to make the connection between the tweets and the web articles.” Tracing the connections between these disparate online activities, in other words, provides critical insight into how such information operations work.
Nimmo highlighted that this situation presents a concerning window into how non-democratic entities might exploit democratic or U.S.-originating AI technologies for their own ends. The information gathered from AI models sheds light on the activities of threat actors, revealing their motives and operations.
The report also details actions taken against a separate group of threat actors associated with Iran. OpenAI identified another ChatGPT account that produced tweets and articles subsequently posted on platforms affiliated with known Iranian influence operations (IOs).
Despite being reported as discrete efforts, the potential overlap between these operations raises questions about possible collusion among Iranian influence operators. The report noted, “The discovery of a potential overlap between these operations—albeit small and isolated—raises a question about whether there is a nexus of cooperation among these Iranian IOs, where one operator may work on behalf of what appear to be distinct networks.” This observation suggests a more coordinated effort than previously understood.
In another alarming discovery, OpenAI reported banning multiple ChatGPT accounts that were implicated in a romance-baiting network, also referred to as “pig butchering.” This nefarious scheme involved generating and translating comments for dissemination across social media platforms like X, Facebook, and Instagram. After these findings were made public, Meta indicated these activities likely originated from a newly established scam operation in Cambodia.
OpenAI has taken proactive steps to counter the growing threats by strengthening its investigative capabilities. The organization remains committed to preventing abuse by adversarial actors and has prioritized its partnerships with U.S.-allied governments, industry stakeholders, and other key players. OpenAI’s dedication to understanding the evolving landscape of AI misuse has intensified, especially since it first reported on these emerging threats.
Furthermore, the company believes that collaboration with upstream and downstream partners, such as hosting providers, software providers, and social media platforms, yields important insights into threat behaviors. OpenAI emphasizes that its success is augmented by the work of other organizations engaged in similar investigations.
OpenAI remains resolute in its mission to identify, prevent, disrupt, and expose any attempts to use its models for harmful purposes. The company acknowledges that threat actors continuously challenge its defenses, but it is determined to stay a step ahead. The ongoing analysis and intervention strategies being put in place underscore the importance of vigilance in the face of evolving threats.
Through extensive research and robust countermeasures, OpenAI aims to safeguard the integrity of American AI models and protect against manipulation by malicious entities, whether domestic or foreign. As the landscape of AI and information warfare continues to evolve, the commitment to transparency and accountability remains paramount.