YouTube is using AI to help stop the improper use of AI.
Creators in YouTube’s Partner Program — those with 1,000 subscribers and 4,000 valid public watch hours in the last year, or 1,000 subscribers and 10 million valid public Shorts views in the last three months — are gaining access to an AI feature that’s intended to stop or slow the spread of deepfakes. The likeness detection tool was originally announced at Made on YouTube in September and is meant to help identify and manage AI-generated content that features someone’s likeness.
As YouTube said in a video posted Tuesday to its Creator Insider channel, it “lets you easily detect, manage, and request the removal of unauthorized videos where your facial likeness may be altered or made with AI—a critical way to safeguard your identity and ensure your audience isn’t misled.”
Creators first have to confirm their identity by uploading a photo ID and a short selfie video. They can then review flagged videos in the Content Detection tab in YouTube Studio. If they determine that a video is AI-generated, they can request its removal.
“Creators can already request the removal of AI fakes, including face and voice, through our existing privacy process. What this new technology does is scale that protection,” Amjad Hanif, YouTube’s vice president of creator products, told Axios in September.
Today, the tool became available to some creators in the YouTube Partner Program, and it will continue to be rolled out in the coming weeks.
“At YouTube, our goal is to build AI technology that empowers human creativity responsibly, and that includes protecting creators and their businesses,” YouTube said in its video. “We built this tool to help you monitor how your likeness shows up—understanding if other people are generating videos using your facial likeness—to safeguard your identity.”