Ahead of the 2020 U.S. election and a House Energy and Commerce Committee hearing on manipulated media, Facebook announced that it would strengthen its policy toward misleading videos identified as deepfakes, in which a person in an existing image, audio recording, or video is replaced with someone else’s likeness.
While the ban won’t extend to parody, satire, or video edited solely to change the order of words, it will affect a swath of edited and synthesized content published for the purpose of hoodwinking viewers.
In a blog post confirming an earlier report from The Washington Post, Facebook vice president of global policy management Monika Bickert said that, going forward, Facebook would remove media that has been modified “beyond adjustments for clarity or quality” in ways that “aren’t apparent to the average person.”
Content generated by machine learning algorithms that merge, replace, or superimpose people would also be subject to removal, she said.