YouTube Now Lets You Request the Removal of AI Content That Impersonates You




Privacy Police

Generative AI’s potential to allow bad actors to effortlessly impersonate you is the stuff of nightmares. To combat this, YouTube, the world’s largest video platform, is now giving people the ability to request the removal of AI-generated content that imitates their appearance or voice, expanding on its currently light guardrails for the technology.

This change was quietly added in an update to YouTube’s Privacy Guidelines last month, but wasn’t reported until TechCrunch noticed it this week. YouTube considers cases where an AI is used “to alter or create synthetic content that looks or sounds like you” as a potential privacy violation, rather than as an issue of misinformation or copyright.

Submitting a request is not a guarantee of removal, however, and YouTube’s stated criteria leave room for considerable ambiguity. Among the factors YouTube says it will consider are whether the content is disclosed as “altered or synthetic,” whether the person “can be uniquely identified,” and whether the content is “realistic.”

But there is a huge and familiar loophole: YouTube will also weigh whether the content can be considered parody or satire, or, even more vaguely, whether it holds some “public interest” value. These are nebulous qualifications that suggest YouTube is taking a fairly soft stance here, one that is by no means anti-AI.

Letter of the Law

In keeping with its standards for any form of privacy violation, YouTube says it will only hear out first-party claims. Third-party claims will be considered only in exceptional cases, such as when the impersonated individual lacks internet access, is a minor, or is deceased.

If the claim goes through, YouTube will give the offending uploader 48 hours to act on the complaint, which can involve trimming or blurring the video to remove the problematic content, or deleting the video entirely. If the uploader fails to act in time, their video will be subject to further review by the YouTube team.

“If we remove your video for a privacy violation, do not upload another version featuring the same people,” YouTube’s guidelines read. “We’re serious about protecting our users and suspend accounts that violate people’s privacy.”

These guidelines are all well and good, but the real question is how YouTube enforces them in practice. The Google-owned platform, as TechCrunch notes, has its own stakes in AI, including the release of a music generation tool and a bot that summarizes comments under short videos — to say nothing of Google’s far greater role in the AI race at large.

That could be why this new ability to request the removal of AI content has debuted quietly, as a tepid continuation of the “responsible” AI initiative YouTube began last year and is now putting into effect. The platform officially started requiring realistic AI-generated content to be disclosed in March.

All that being said, we suspect that YouTube won’t be as trigger-happy with taking down problematic AI-generated content as it is with enforcing copyright strikes. But it’s a slightly heartening gesture at least, and a step in the right direction.

More on AI: Facebook Lunatics Are Making AI-Generated Pictures of Cops Carrying Huge Bibles Through Floods Go Viral


