Experts are sounding the alarm over YouTube's deepfake detection tool, a new safety feature that could allow Google to train its own AI models on creators' faces, according to a report.

The tool gives YouTube users the option to submit a video of their face so the platform can flag uploads that include unauthorized deepfakes of their likeness.

Creators can then request that the AI-generated doppelgangers be taken down.

But the safety policy would also allow Google, which owns YouTube, to train its own AI models using biometric data from creators, CNBC reported Tuesday.

“The data creators provide to sign up for our likeness detection tool is not – and has never been – used to train Google’s generative AI models,” Jack Malon, a spokesperson for YouTube, told The Post.
