What Facebook's nudity and hate report tells marketers about brand safety

Facebook is for the first time revealing how much nudity, graphic violence and terrorist-inspired content appears on the social network.

The numbers Facebook revealed Tuesday in its report on the first quarter show the prevalence of this type of content on the platform, says Guy Rosen, VP of product management. Facebook applied methods similar to those it uses to measure ad visibility for brands, in this case trying to calculate how often "bad" posts, rather than ads, are viewable.

"In the advertising space, an impression is something we're familiar with," Rosen says. "The really important part of this is bad content actually being viewed? Does something actually show up on someone's screen? How often does that happen and what percent of things that show up on a screen are actually things that violate our standards?"

The work Facebook is doing to quantify offensive content could give advertisers a better sense of their risk of appearing near content they'd rather stay miles away from, and of whether the company is getting a better handle on the problem.

Facebook also is studying whether people connect brands with any potentially harmful posts they see in the News Feed.

"We've run studies indicating that most people don't associate ads in News Feed with adjacent content," Rosen says. "And we're working on more research in this area."

And Facebook is trying to calculate how often it is able to catch offending content before anybody sees it. It says it removed 21 million instances of nudity in the first quarter of this year, 96 percent of it before it was ever viewed.

"AI systems help us find bad content faster so we can get to it before anyone reports it or ideally before anyone sees it," Rosen says.

But the machines have an easier time identifying nudity than, for example, hate speech.

The prevalence of nudity was flat quarter over quarter, according to Facebook, but depictions of graphic violence rose. Facebook said 27 out of every 10,000 pieces of content contained such violence in the first quarter of 2018, up from 19 out of 10,000 in the fourth quarter of last year.

Facebook also said it removed 583 million fake accounts in the first quarter, mostly within minutes of their creation.

Facebook hopes to atone for multiple high-profile missteps over the past two years by being more open about how it polices the platform for bad actors. Facebook has had to reckon with its failures during the 2016 presidential election season, when foreign agitators used it to spread fake news. The entire digital ecosystem has had to contend with questions of quality and brand safety, with platforms such as YouTube and Facebook Live repeatedly hosting offensive videos. YouTube in particular was hit with brand boycotts last year because ads appeared on videos with terrorist themes and hate speech.

Twitter, too

On Tuesday, Twitter also took new steps to police its platform for offensive messages. The service is targeting "trolls"—people who harass other users or generally degrade the conversation.

Twitter has been under some of the same scrutiny as Facebook and YouTube for allowing certain offensive speech and behavior. It says it is taking new measures to identify trolls, examining how many accounts people try to create, whether they confirm their email addresses, and indicators like who they mention in tweets. The platform will limit how many people see tweets from those it identifies as trolls.

Like Facebook with its hate speech protocols, Twitter is trying to define what's offensive without stifling open discussion.

"While still a small overall number, these accounts have a disproportionately large, and negative, impact on people's experience on Twitter," the company said in a blog post on Tuesday. "The challenge for us has been: How can we proactively address these disruptive behaviors that do not violate our policies but negatively impact the health of the conversation?"
