Facebook's Zuckerberg says AI could lead to looser nudity policies
Facebook could 'free the nipple' if machine learning allows for different standards around the world
Mark Zuckerberg is considering loosening Facebook's strict no-nudity policy if artificial intelligence can be trained to handle au naturel posts.
The Facebook CEO broached the idea in his latest note on enforcing community standards, which he posted online last week. "Over time, these controls may also enable us to have more flexible standards in categories like nudity, where cultural norms are very different around the world and personal preferences vary," Zuckerberg said.
Zuckerberg was referring to strides Facebook is making with machine learning, a branch of artificial intelligence, where algorithms get smarter about identifying content. Facebook uses artificial intelligence to manage the News Feed, the stream of posts personalized for every user.
Zuckerberg picked an odd time to raise the prospect of nudity, considering the whirlwind around Facebook at the moment. The social media behemoth has been reeling from a New York Times article published last week that said Zuckerberg and Chief Operating Officer Sheryl Sandberg mismanaged the aftermath of the 2016 election, trying to cover up rather than clean up the Russian bad actors who overran the platform.
Facebook's strict policies against nudity have not been without criticism; some people have questioned why the social network and Instagram allow men's nipples while banning women's. Images of breastfeeding have caused a stir, and Facebook has sometimes censored art that featured nudity.
If the artificial intelligence is trained well enough, it could handle nudity with more nuance than the current blanket ban allows.
Facebook is currently dealing with content that breaks its policies, whether that's hate speech, violence or nudity. And the statement comes as advertisers are hypersensitive about brand safety and preoccupied with the character of the platforms they patronize. In an environment like Facebook, that's nearly impossible to control: a brand's ad could land between a post from a deranged family member and a disturbing news story.
Since 2016, Facebook has been trying to show how it is policing content more aggressively and enforcing community standards. That's why Zuckerberg released the latest report on the subject.
"If you are advertising in a News Feed environment I think trying to mandate or have any sort of separation against content is difficult to manage," says Andrea Ching, CMO of OpenSlate, who works closely with brands on managing brand safety issues.
A Facebook spokeswoman says Zuckerberg was talking about a world where the social network could take cultural preferences around nudity into account and customize its policies to fit those norms. But the technology is still a long way off, the spokeswoman says.
Facebook has committed to hiring thousands of content moderators.
Still, Facebook should be careful relying on artificial intelligence, says Joshua Lowcock, chief digital officer and brand safety head at UM.
"If there's a lesson for Facebook on brand safety," Lowcock says, "it's that while AI can be useful, it doesn't replace human moderation and the right policy and content controls for advertisers."