An ad industry group has started grading major social media platforms—Facebook, YouTube, Twitter, Snapchat, TikTok and Pinterest—on how well they monitor obscene and hateful content, and a new report shows the services still have plenty of work to do to tackle brand safety issues.
On Tuesday, the Global Alliance for Responsible Media (GARM), a branch of the World Federation of Advertisers, delivered what it called its first “digital brand safety report,” which has been in the works since last year. The report measures the prevalence across the platforms of sensitive content, including hate speech, sexual content, violence and other categories that plague the internet and advertisers. It relies on self-reporting by the platforms to provide transparency and insight into how well they police their services.

The report has been viewed as a crucial first step for a digital ad industry that has been wrestling with its own complicity in supporting sites that distribute harmful material to the masses. GARM began working more closely with all the platforms following the political and social upheaval of 2020 and the brand boycott of Facebook, which was prompted by concerns over hate speech and disinformation.
"We recognize marketers need to be able to have a single report to understand the industry’s progress through the lens of a common language and framework,” Carolyn Everson, Facebook’s VP of global business group, said in a statement on Tuesday.
The digital platforms submitted information in key areas: how safe they are for consumers, how safe they are for advertisers, how effective they are at policing their services, and how responsive they are in correcting mistakes. The report shows there are still gaps in reporting, since not every platform answered every question, and there were inconsistencies in how each service reported its brand safety metrics.
Google's YouTube, which hosts only video content, revealed the percentage of video views on content that violates its community guidelines. In the fourth quarter of 2020, YouTube estimated that 0.18% of video views fell into that category, and that less than 1% of ad impressions came from such violating videos. YouTube, however, did not break out what types of videos those were, whether they contained sexual content, hate speech, violence or other material.
Debbie Weinstein, VP, global solutions at YouTube, said in a statement, “It is our hope that the report helps advertisers more easily assess the progress platforms like YouTube are making in this critical area.”
The report lands on the same day that the World Federation of Advertisers kicks off Global Marketer Week, an industry confab featuring some of the same voices that pushed for this brand safety reckoning. On Tuesday, Jonathan Greenblatt, CEO of the Anti-Defamation League, was set to open the conference with a speech calling on the industry to hold media and social media companies responsible for fueling hate. The ADL joined the NAACP in pushing the Facebook brand boycott last year.
In GARM’s first report, Facebook did break down the prevalence of violating posts in specific areas, including sexual content, violence and hate speech. Facebook, for instance, says that up to 0.08% of posts in the fourth quarter of 2020 contained hate speech. Still, Facebook and Instagram were both unable to provide GARM with numbers on how much spam infects the platforms, a gap Facebook said it was working to rectify. Also, Facebook estimated the prevalence of terrorist organizations on the service, but not of hate organizations.
Instagram, meanwhile, did not divulge data on sexual content and nudity, hate speech or spam. Even with those gaps, YouTube and Facebook provided more data than Twitter, Pinterest, Snapchat or TikTok.

Both Facebook and YouTube have been eager to prove their platforms are safe for brands. Over the past few years, the companies have faced harsh criticism for allowing offensive content to proliferate, and brands have landed in the crosshairs of users for running advertisements alongside such content. The problem is so widespread that it has created a niche industry of brand safety advocates and led to new methods like article blocklisting, one of the ways advertisers avoid subjects and websites that go against their own standards.
Programmatic advertising has exacerbated the problem, sometimes automatically serving up ads alongside content not considered brand safe. “Consumers don’t understand programmatic advertising,” says Lauren Douglass, senior VP of marketing at Channel Factory, a brand safety advocate that specializes in YouTube. “Consumers still have that expectation that when you’re running ads alongside [unsafe content], you’re directly supporting, endorsing and monetizing that content.”