OpenAI revealed in a blog post today that over the past three months it has identified and disrupted five covert influence operations that were attempting to manipulate public opinion or influence political outcomes.
The influence campaigns disrupted by OpenAI are claimed to have come from Russia, China, Iran and, surprisingly, Israel, although the Israeli campaign was linked to a company rather than the government. The campaigns involved a range of tasks, including generating content in various languages, creating fake social media profiles and conducting research.
At its core, an influence campaign generates content designed to manipulate public opinion or political outcomes and distributes it through various channels of communication. What stands out here is the variety of ways these campaigns were using OpenAI tools such as ChatGPT.
In the first campaign detailed by OpenAI, a Russian operation dubbed “Bad Grammar,” those behind it used OpenAI models to debug code for running a Telegram bot and ChatGPT to create short political comments in Russian and English that were subsequently posted to Telegram.
A second Russian operation, dubbed Doppelganger, used OpenAI models to generate comments in English, French, German, Italian and Polish that were posted on X and, bizarrely, the joke site 9GAG. It’s not clear what influence can be exerted on a site that posts funny memes, but the Russians evidently saw some value in it.
The third campaign, allegedly from China and dubbed Spamouflage, used OpenAI models to research public social media activity and then generate text in Chinese, English, Japanese and Korean. That text was then posted on X, Medium and Blogspot. The campaign also used OpenAI tools to debug code and manage databases and websites.
The fourth campaign, an Iranian operation known as the “International Union of Virtual Media,” used OpenAI models to generate and translate long-form articles, headlines and website tags that were then published on a website.
The final campaign was traced to an Israeli company called STOIC, which, according to its website, provides ongoing, real-time monitoring, data analysis and management services for political parties, ministries and municipalities. The company used OpenAI tools to generate articles and comments that were posted on Instagram, Facebook, X and websites associated with the company.
The content posted by these various operations spanned a range of topics but, unsurprisingly, many related to geopolitical matters, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections and politics in Europe and the U.S.
“Detecting and disrupting multi-platform abuses such as covert influence operations can be challenging because we do not always know how content generated by our products is distributed,” OpenAI noted in the blog post. “But we are dedicated to finding and mitigating this abuse at scale by harnessing the power of generative AI.”
Photo: Focal Foto/Flickr