As long as there have been deepfakes, there has been political deepfake panic. Concerns about the use of deepfakes in politics – that is, artificially generated or manipulated material depicting something that never happened – have circulated since at least 2018, when the technology was used to make Obama appear to call Trump a “complete dipshit”. Articles from countless media outlets over the years have proclaimed that political deepfakes are an imminent threat to public discourse, truth and democracy as we know it.
This week, concerns have resurfaced in Australia after the Liberal National party posted an AI-generated video of the Queensland premier, Steven Miles, on TikTok. The video depicts Miles dancing, and is marked as AI-generated. Even without the label, the tells are evident: the uncanny blurring of features, the unsettling flow of his pants as he moves, the paper he holds mysteriously disappearing and reappearing. Echoing popular political deepfake anxieties of the past seven years, Miles reacted by saying it represents a “turning point for our democracy”.
To date, the existential threat to democracy posed by political deepfakes has been largely unrealised, although their use appears to be on the rise. In January, an AI-generated voice imitating Joe Biden was used in robocalls to try to dissuade people from voting. This week, deepfake videos of Kamala Harris are going viral on TikTok.
Others are attempting to use generative AI to their own advantage. Deepfakes made their debut in Indian election campaigns in 2020, when a minister altered videos of himself to speak in different languages. His team noted that deepfakes allowed them to scale campaign efforts in ways that would otherwise be impossible. This year, a Melbourne Labor councillor has used AI to generate campaign songs in different musical styles and languages, which he then runs as paid, targeted advertisements on Facebook.
While concerns from a few years ago seem to have been premature, the media landscape has taken a significant turn since the popular rise and ready availability of generative AI tools. Some anticipate that 2024 – a pivotal election year around the world – may be the year we actually see the electoral impact of deepfakes.
Professor of media studies Mark Andrejevic emphasises that it’s not solely the technology that is cause for concern – it’s the broader political culture we are creating, and the undermining of institutions once relied upon to separate fact from fiction.
“It’s not just a question of whether deepfakes are taken for the truth,” he says, “but how they align with broader trends toward politics as entertainment and the disintegration of shared truths.” So while the fake video of Miles dancing is laughably unsophisticated, focusing on believability alone may risk missing the forest for the trees.
Because all of this is occurring within the context of a degraded information ecosystem, rampant online mis- and disinformation and conspiracy, and algorithmically curated online feeds that determine what we see not according to quality or veracity, but by what will keep us engaged the longest. Deepfake technology contributes to broader information strategies fostered by a hyper-commercialised media environment, which Andrejevic notes is underpinned by business models that “privilege engagement metrics over the goal of informing readers and viewers”.
Looking into the dystopian crystal ball, it isn’t hard to imagine a future in which increasingly sophisticated deepfakes are combined with granular targeted advertising, thanks to decades of surveillance capitalism. Imagine: your own personalised political deepfake campaign, designed specifically to push your particular buttons and curated for your feed based on predictions made from your personal data and online behaviours. In fact, targeted and personalised deepfakes are already being used for scams, impersonating loved ones and colleagues. For bad-faith actors, this is the stuff of manipulative political dreams; for the rest of us, it’s a nightmare.
Personally, I’m not particularly excited about playing synthetic media Sherlock Holmes every time I go online. Is the prime minister’s mouth moving in a reasonably human manner? Does the opposition leader have an extra finger today? Give us a break, we are all very tired!
Then there are the ethical and legal concerns: from questions about copyright, defamation and privacy law, to issues of consent and identity theft. Political deepfakes may not only erode trust through the misleading media itself, but also undermine accountability by creating an environment in which politicians can more readily deride real footage as a forgery.
Even the term “deepfakes” is troubled: it comes from the username of a Redditor who gained notoriety in 2017 by non-consensually superimposing the faces of female celebrities on to pornographic material. Indeed, for years experts have warned that women, not politicians, are the overwhelming target of malicious uses of deepfake tech. So far, this is where the most pronounced harm is being done, and poorly considered laws attempting to combat or criminalise it risk making things worse.
So while it seems unlikely that deepfakes alone will immediately topple democracy, they are a symptom of, and a contributor to, the broader erosion of public discourse within a frail media landscape and toxic political culture, ultimately leading toward what Andrejevic refers to as “information rot”. I doubt any of us will be hanging up our deerstalker AI detective hats anytime soon.