Facebook issued its latest report on enforcing decency standards on the social network Thursday, and CEO Mark Zuckerberg took the opportunity to criticize calls for the company to be broken up by regulators.
In its "Community Standards Enforcement Report" posted online, Facebook provided metrics on how the platform is performing when it comes to preventing and removing content that runs afoul of rules regarding nudity, exploitation of children and hate speech.
But during a call with reporters, Zuckerberg was asked about recent suggestions from prominent politicians, like Senator Elizabeth Warren, that U.S. regulators break up Facebook and its ownership of Instagram and WhatsApp because of its control of such a large share of people’s internet time and data.
The CEO pointed to the investments Facebook is making to clean up the platform and ultimately combat larger societal problems. Zuckerberg said that Facebook is spending at a “massive level,” which would be harder to sustain for any smaller company—even mentioning Twitter by name.
“The amount of our budget that goes toward our safety systems is greater than Twitter’s whole revenue this year,” Zuckerberg said. “We’re able to do things that I think are just not possible for other folks to do.”
Twitter's revenue topped $3 billion in 2018. Zuckerberg also argued that the existence of competing social media companies, including Twitter, Snapchat and TikTok, indicates that the industry is sufficiently competitive.
Facebook, like Twitter and YouTube, has been inundated with fake accounts and malicious actors who spread disinformation and engage in other harmful activity, the kind that was on full display during the 2016 U.S. presidential election.
Here is what we learned from its enforcement report, which provided new metrics and outlined how much of the illicit activity was identified and removed:
Facebook prohibits nudity and sexually explicit posts, and the company said that in the first quarter of 2019, 12 to 14 of every 10,000 post views contained some form of sexually explicit content. That was up from eight to 10 out of every 10,000 in the prior quarter, and the company attributed the uptick to having previously focused less on nudity while tackling more pressing concerns.
“The prevalence of adult nudity and sexual activity violations on Facebook fluctuated during the last six months,” the report says. “It declined in Q4 2018 before rising again in Q1 2019. During the same time period, content [reviewed] for adult nudity decreased. The reason for this is that we prioritized our efforts in other, more harmful, content areas.”
For the first time, Facebook broke out sexually exploitative posts involving children. The company estimated that three out of every 10,000 posts viewed in the first quarter of 2019 involved child nudity. Facebook also said that it acted on fewer posts of this nature than it had in previous quarters, which it partly blamed on a technical glitch that prevented it from detecting copycat posts featuring the same material.
Hate speech is one of the toughest categories to identify, especially for artificial intelligence, which struggles with context. It has also become one of the most pressing, because propaganda spread online can incite readers to violence.
Facebook claims its AI is improving: the company's systems flagged 65 percent of hate speech in the first quarter of 2019, up from 24 percent in the first quarter of 2018.
The human element
“AI is not a silver bullet,” said Guy Rosen, VP of product at Facebook. “We need a combination of technology and people.”
To that end, some of Facebook's biggest investments are going toward staff to keep watch over the platform; the company has hired 3,000 people for that purpose in the past two years.
But those reviewers, often repeatedly subjected to the worst images and videos online, have endured trauma. Facebook recently raised their pay from about $15 an hour to as much as $22 an hour, depending on the region where they work. It also requires a mental health specialist to be on call during all shifts.