OpenAI Deleted Accounts From Foreign Groups Using AI Models For Disinformation


OpenAI has deleted accounts belonging to threat actors in Russia, China, Iran, and Israel that were using its models to manipulate public opinion, spread disinformation, and influence political outcomes without disclosing their real identities.

The company published the findings of its recent investigation as its first report of this kind, detailing its policies, how it monitors threat actors, trends and threats in 2024, case studies, and other insights from the investigation.

“Over the last three months, our work against deceptive and abusive actors has included disrupting covert influence operations that sought to use AI models in support of their activity across the Internet,” states the document. “These included campaigns linked to operators in Russia (two networks), China, Iran, and a commercial company in Israel.”

OpenAI highlighted five cases involving the main threat groups: Bad Grammar (a name coined by OpenAI) and Doppelganger from Russia, Spamouflage from China, the International Union of Virtual Media (IUVM) from Iran, and Zero Zeno, an operation run by a commercial company in Israel and nicknamed for this investigation.

Most of these threat actors used OpenAI's models to translate text, generate content, spread disinformation in multiple languages, and reach international audiences through social media, Telegram channels, forums, and various blogs and websites.

In the report, OpenAI shares details of how the threat actors used AI tools, including screenshots of the content they shared. Bad Grammar, for example, used the models to generate comments for multiple Telegram channels targeting audiences in the United States, Russia, Ukraine, Moldova, and the Baltic States, while Doppelganger created anti-Ukraine comments and memes to post on X and 9GAG, targeting users in Europe and North America.

OpenAI clarified that these actors did not achieve significant results, reach large audiences, or meaningfully increase engagement by using tools like ChatGPT. The company also emphasized that its AI tools helped its team identify the threats and take action.

“Detecting and disrupting multi-platform abuses such as covert influence operations can be challenging because we do not always know how content generated by our products is distributed,” said OpenAI. “But we are dedicated to finding and mitigating this abuse at scale by harnessing the power of generative AI.”

OpenAI has been working on improving the quality of its content and optimizing its AI models to reduce hallucinations, one of users' main concerns. The company also recently partnered with News Corp to feed its models with reputable journalistic content.
