Facebook is giving the ad world new assurances about the integrity of the platform and how it moderates potentially controversial content, following weeks of scrutiny that increased pressure on the social network to levels not seen since an advertiser boycott last year.
The social network's leadership has been communicating with Madison Avenue to explain the company's side of the story, following damaging Wall Street Journal reports about its alleged practices and subsequent government action against Facebook. The reports raised questions about the sincerity of commitments Facebook has made to the ad world about the safety of its platform. Facebook has mounted a spirited rebuttal, saying that points raised by the reports, which the Journal called "The Facebook Files," do not tell the whole story.
In a recent note to its clients, obtained by Ad Age, Omnicom Media Group said it has been in close communication with Facebook to understand the thorny issues involved, especially regarding the basic safety of the social network. In its response to Omnicom, Facebook explained how it is handling the subjects that have been central to marketers' concerns with the platform. The Journal identified gaps in how the social network vets content that breaks its policies and reported on Instagram's potentially negative health effects on teenagers. Brands have repeatedly called out the threats posed by just those types of topics, not wanting to support the proliferation of toxic Facebook material or be seen as aiding in harming teenagers.
In particular, one question vexed Omnicom: whether Facebook's XCheck program, mentioned in the Journal report, affected the social network's self-reporting on how much offensive content permeates News Feed. The XCheck program allowed privileged high-profile accounts to break Facebook's rules and remain active, and it could have affected how Facebook discloses the "prevalence of violating content," which gets reported in Facebook's quarterly Community Standards Enforcement Report (CSER).
Facebook "flatly said no," according to Ben Hovaness, senior VP of marketplace intelligence at Omnicom Media Group, in an interview with Ad Age. “There was no possibility of an interaction between the CSER and XCheck," Hovaness said. "They were very direct on that point.”
Omnicom took Facebook at its word that it could trust the community safety reporting, but the agency also emphasized the need for independent verification.
"Our quarterly Community Standards Enforcement Report leads the industry in showing what we’re enforcing against and how effective we’re being,” a Facebook spokesman said in a statement. “The report accurately reflects our enforcement, and cross check does not impact our prevalence estimates. Cross check is designed to increase our accuracy, and that's our goal as we actively make changes to improve it.”
The prevalence of offensive material on Facebook has become a crucial metric for brands worried about safety in settings like Facebook News Feed, where there have historically been few controls for brands to manage the context in which their ads appear (whether the ads show up above or below news posts or disinformation that offends brands' corporate sensibilities). In a limited test, Facebook has recently given brands controls to avoid appearing above or below News Feed posts that could embarrass a brand dropped in the middle of them.
Advertisers are trying to identify which policies Facebook could feasibly implement to sensibly address brand and community safety, while recognizing that there are technical limitations, Hovaness said.
Omnicom's note said that the existence of the XCheck program was further evidence that Facebook needs to allow independent third parties to audit its brand safety enforcement policies. “Advertisers need to know that they aren’t reliant on platforms’ self-evaluation to understand the environments they are buying media in,” the note said.
Omnicom’s note discussed issues such as the need for “brand safety” controls so advertisers’ ads don’t pop up next to negative material. “The need for advertiser adjacency controls was highlighted by the story about teen girls being served content they don’t want [on Instagram],” Omnicom’s note said. “After all, why should advertisers be showing up next to organic content that users don’t even want to see?”
Last week, IPG Mediabrands also sent its own note to clients, outlining the dangers of disinformation on social media, including Facebook, YouTube and Twitter.
“What clients need, in the wake of any news about platform practices, is help breaking down the headlines to gain a deeper understanding of the core issues,” Hovaness said. “They want to understand how their business is being impacted and how the partner is being held accountable.”
The Journal reports and Congressional hearings slated for this week have refocused a harsh spotlight on Facebook. Antigone Davis, Facebook’s head of global safety, will testify before the Senate about claims that the company was not open about its own research, which found Instagram could lead to self-esteem issues in teenage girls. Facebook’s leaders have mounted a vigorous defense of the company, stating that the Journal mischaracterized its programs and research, and that the company’s intent was being misconstrued.
Over the weekend, Facebook rebutted the claims about its research on teen health. “The research actually demonstrated that many teens we heard from feel that using Instagram helps them when they are struggling with the kinds of hard moments and issues teenagers have always faced,” Pratiti Raychoudhury, Facebook’s VP and head of research, said in a blog post.
Still, on Monday, Facebook halted development of the Instagram Kids app, which promised to be a sensitive subject at Thursday’s Senate hearing.