Ahead of election, Facebook tightens policy against deepfakes
Facebook Inc. has shed more light on its efforts to eradicate doctored videos known as deepfakes, an issue the company has identified as an emerging threat ahead of the U.S. election.
“We are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes,” Monika Bickert, vice president of global policy management, wrote in a blog post. “While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases.”
The operator of the world’s largest social network pledged to remove content that has been “edited or synthesized” beyond adjustments for quality or clarity and is deemed likely to mislead viewers. Facebook emphasized, however, that the new rules will not apply to parody or satire. Videos that don’t meet its internal criteria for removal, the company said, may still be reviewed by one of the more than 50 fact-checking organizations it partners with worldwide.
The company will also collaborate with Reuters on free online courses to help newsrooms spot deepfakes.
Facebook has already experienced the problems that manipulated media can cause. Last year, a video of House Speaker Nancy Pelosi, edited to make it look like she was slurring her speech, made the rounds on the social network. The video wasn’t technically a “deepfake”—which would mean it was completely fabricated—but it introduced Facebook to the kinds of misinformation it will face heading into the 2020 election. Facebook has acknowledged that it moved too slowly to curtail the reach of the Pelosi video.
Other U.S. internet giants are also tightening their content rules ahead of the elections. Following criticism that internet companies ran intentionally misleading ads from U.S. President Donald Trump, Alphabet Inc.’s Google is restricting misinformation and banning deepfakes in ads.
Facebook’s policy details were first reported by the Washington Post.