Most people’s fears about AI are focused on the future. But we’re not paying nearly enough attention to how these technologies are already dramatically increasing cases of sexual abuse in the present.
Take deepfake pornography, a form of image-based sexual abuse in which digitally altered sexual images of victims are created and shared without their consent. This is rising dramatically in Britain: Google “deepfake porn” and the top results will not be critical discussions of this abuse, but sites where one can buy and access these abusive images. I’ve been writing about sexual abuse for years, and I’m deeply concerned that we’re not doing enough to stop this.
In recent months, people have shared digitally altered sexual images of the new deputy prime minister Angela Rayner and celebrities including Taylor Swift. But you don’t need to be famous to appear in one of these images or videos – the technology is readily accessible, and can easily be used by ex-partners or strangers to humiliate and degrade. As a tech luddite, I was still under the impression that one needed some digital skills to commit this kind of abuse. Not so. You can simply take someone’s image, put it into a “nudify” app, and the app’s AI will generate a fake nude picture. “It’s quick and easy to create these images, even for anyone with absolutely no technical skills,” Jake Moore, an adviser at a cybersecurity firm, told me.
The impact of this kind of abuse on victims is traumatic and dangerous: first, there is the covert theft of your image; then, the trauma of seeing it “nudified”; and then the re-traumatisation when the image is shared online with other people. Victims of this abuse have reported serious mental health consequences. One woman told this newspaper she experienced repeated nightmares and paranoia after she was the target of deepfake images. Another, Rana Ayyub, who has also spoken publicly about being a target, experienced so much harassment as a result of a deepfake pornography image that she had to approach the United Nations for protection.
So how can we stop it, and why aren’t we doing so? The now-toppled Conservative government had planned to introduce a bill to address the alarming proliferation of deepfake pornography by making it a criminal offence, but the bill had serious gaps that would leave victims exposed, and gave perpetrators too much freedom to continue creating these images. In particular, the bill didn’t cover all forms of deepfake pornography – including those that used emojis to cover genitals, for example – and it required proof of motives, such as that the perpetrator intended to use the image for sexual gratification.
This is a problem on several levels: first, it leaves perpetrators open to arguing that they simply created the images “for a laugh” (I’m thinking of Donald Trump’s “locker room talk” comments), or even for “artistic purposes” (God help us). And this brings us to one of the major problems with this type of abuse. In certain circles, it can masquerade as something that is funny or that we should take as “a joke”. This feeds into a certain type of masculine behaviour that has been on the rise in the wake of the #MeToo movement, which attempts to downplay forms of sexual abuse by accusing women of taking “laddish” behaviour too seriously.
Second, putting the burden on the prosecution to prove the perpetrator’s motive sets a very high – perhaps impossible – bar for a criminal prosecution. It’s very difficult to prove what a perpetrator was thinking or feeling when they created deepfake pornographic images. As a result, police forces may be less willing to charge people for these crimes, meaning there will be fewer consequences for perpetrators.
A recent Labour party initiative looked at addressing some of these issues, so I’ll be watching to see if these gaps are filled in any forthcoming legislation. There are a number of things the party could do to clamp down on these crimes – and other things we could be doing now. We could be pursuing civil remedies for deepfake pornography, for instance, which can be a quicker and more effective action than going through the criminal justice system. New rules allowing courts to take down images swiftly could also be a huge help to victims.
But there’s an even bigger issue that we’ll need to tackle: the search engines and social media sites that promote this type of content. Clare McGlynn, a professor at Durham University who studies the legal regulation of pornography and sexual abuse, told me that she had been discussing this problem with a prominent technology company for several months, and the company had still not changed the algorithm to prevent these websites from showing up at the top of the first page. The same is true of social media sites. Both McGlynn and Moore say that they have seen deepfake websites advertised on Instagram, TikTok and X.
This is not just a dark web problem, where illegal or harmful content is hidden away in the sketchiest reaches of the internet. Deepfake pornography is being sold openly on social media. In theory, this should make the problem easier to tackle, because social media sites could simply ban these kinds of adverts. But I don’t have much faith: as a female journalist, I’ve had plenty of abuse on social media, and have never received a response when I’ve complained about this to social media companies.
This is where our regulators should step in. Ofcom could start punishing search engines and social media sites for allowing deepfake adverts. If the government made creating deepfake pornography a criminal offence, the regulator would be forced to act. Our new prime minister has already made it clear that his government is all about change. Let’s hope that protecting the victims of sexual abuse and stemming the tide of deepfake pornography is part of this.
-
Lucia Osborne-Crowley is a journalist and author