Top Chatbots are Parroting Russian Propaganda, Report Finds

“Russian disinformation narratives have infiltrated generative AI.”

Reading Up

Been reading any good Russian propaganda lately? According to a new report from the misinformation watchdog NewsGuard, top chatbots certainly have.

A NewsGuard audit of ten top chatbots — a list including OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, Anthropic’s Claude, and the Perplexity search chatbot — found that the popular AI tools are often parroting false narratives with direct links to a Russian state-tied disinformation network of fake news sites made to look like credible American outlets.

NewsGuard tested the ten chatbots on their knowledge of 19 specific fake narratives pushed by a network of websites recently linked to one John Mark Dougan, a former Florida sheriff’s deputy currently living under asylum in Moscow (yes, seriously). As a recent New York Times report revealed, Dougan operates an extensive, largely AI-powered constellation of fake news sites with mundane-sounding titles — among them New York News Daily, The Houston Post, and The Chicago Chronicle, to name a few — where he publishes droves of content promoting false narratives.

And now, it seems that Dougan’s fake news has worked its way into popular AI tools. NewsGuard’s audit caught all ten chatbots it tested “convincingly” repeating “fabricated narratives” pushed by Dougan and his state-affiliated fake news network, and importantly, these weren’t one-offs: the AIs parroted talking points in a staggering one-third of total responses examined, in many cases even referencing Dougan’s websites as sources.

“Russian disinformation narratives have infiltrated generative AI,” declares the report, published Tuesday.

Metabolizing Misinformation

The demonstrably false claims parroted by the chatbots, according to NewsGuard, include conspiracies regarding Ukrainian President Volodymyr Zelensky’s alleged corruption, as well as a fabricated claim that the widow of Russian dissident Alexei Navalny plotted the murder of an Egyptian journalist.

NewsGuard tested 570 inputs in total, prompting each chatbot 57 times. Throughout testing, the chatbots responded with disinformation both when NewsGuard researchers and reporters asked about a given conspiracy — or, in other words, as if someone were using a chatbot as a search engine or research tool — and when a bot was specifically asked to write an article about a false, Russia-pushed narrative.

The watchdog group didn’t note which chatbots were better or worse at parsing misinformation. Even so, these errors in AI information-gathering aren’t exactly your standard AI hallucinations, and NewsGuard’s findings represent AI’s concerning new role in the misinformation cycle — anyone using AI chatbots as a means of interacting with news and information might want to think twice. Or, maybe, just reach for actual news websites for now.

“What’s really alarming is that hoaxes and propaganda these chatbots repeated so frequently were hardly obscure, nor is the person behind them,” NewsGuard co-CEO Steven Brill told Axios.

“For now,” Brill added, “don’t trust answers provided by most of these chatbots to issues related to news, especially controversial issues.”

More on AI misinformation: Even Google’s Own Researchers Admit AI Is Top Source of Misinformation Online


