The ad industry's plan to regulate the dark side of social media
Since the Facebook ad boycott, there has been a push to civilize social media, with renewed focus on brand safety. But there is only so much platforms can do to eliminate all the potentially offensive and misleading material surging through their pages and videos.
The best that brands can hope for on social, it seems, is a well-lit place, if not a totally safe space.
An early draft of one proposal from an influential industry group, Global Alliance for Responsible Media, seems to recognize these limitations. The group, known by its acronym GARM, has been circulating a rough cut of new rules it is drafting in the wake of the Facebook boycott.
The proposal attempts to define offensive content, push platforms to prove how prevalent that content is on their services, and give brands transparency about when ads appear next to that content.
“Irrespective of harmful content being posted, every marketer should have the ability to manage the environment they advertise in and the risks,” says one slide in GARM’s proposal, which was obtained by Ad Age and marked “confidential.”
GARM declined to discuss the proposal, saying the work is not yet finished. Ad Age reviewed six pages of the draft, which show that GARM is working to finalize its plans later this month. The plans cover defining hate speech, grading platforms on their ability to police harmful content, conducting audits to prove that platforms are taking the actions they promise, and giving advertisers more controls.
GARM is just one of the entities looking at how to clean up social media. It was anointed as an industry watchdog in the wake of the Facebook brand boycott, a movement that was dubbed Stop Hate for Profit by the civil rights groups that organized it. In recent weeks, top Madison Avenue firms, including major media holding companies like Omnicom and IPG Mediabrands, a part of Interpublic Group of Companies, have advanced their own plans around brand safety, too.
Omnicom released a proposal last month called the Council on Accountable Social Advertising. Omnicom says it is working with Facebook, Twitter, YouTube, Snapchat, Reddit and TikTok to offer new controls to limit where ads appear.
Omnicom got firm commitments from Snapchat and TikTok to deliver “adjacency” controls, which will let advertisers avoid running ads alongside content they deem harmful. TikTok will roll out its controls in the fourth quarter, while Snapchat promised them in the first quarter of 2021. YouTube has provided such controls since it faced an advertiser rebellion in 2017 over serving ads in videos that featured topics related to terrorism and hate speech.
Meanwhile, Twitter and Facebook, arguably two of the most difficult environments to control, have promised to consider adjacency controls. Facebook’s News Feed and Twitter’s timeline have been uneasy grounds for brands worried about showing up near combative election coverage or combustible social issues.
“We’re giving advertisers control about exactly where their ads deliver and then getting insights after the fact about where their ads did deliver,” says Ben Hovaness, Omnicom Media Group’s managing director of marketplace intelligence and innovation.
IPG Mediabrands has pursued similar goals with an initiative it called Media Responsibility Principles that promotes advertising transparency and urges platforms to police hate speech and harmful posts.
The difficulty of policing social media was made even more evident last week with the violence in Kenosha, Wisconsin, where a shooter was accused of shooting three protesters, killing two, as they demonstrated in the streets demanding justice for the latest victim of police violence, Jacob Blake. The shooter appeared to be affiliated with a militia group that had organized on Facebook, the kind of group that Facebook had, just days prior, promised to banish from its service.
Facebook said it has not uncovered links that tie the shooter to militia groups on the social network. “We have not found evidence on Facebook that suggests the shooter followed the Kenosha Guard Page or that he was invited on the Event Page they organized,” a Facebook spokesperson said by e-mail.
Facebook had announced a crackdown on militia and QAnon conspiracy groups after civil rights groups like NAACP and the Anti-Defamation League pressured the social network with the ad boycott.
“Facebook will need to prioritize some form of in-feed brand safety controls as well as solve for Facebook Groups in order to demonstrate it cares about protecting society and advertisers,” says Joshua Lowcock, chief digital officer at UM and global brand safety officer at Mediabrands, which is a part of IPG.
Facebook has put its faith in GARM to be part of the answer to its problems, but it seems that guidance can’t come soon enough. GARM is drafting definitions in 11 areas of hate speech and other forms of offensive content, including sex, violence, guns, drugs, crime and human rights violations. The early draft of its proposal shows GARM also plans to introduce a new category for “disinformation,” which is related to its section on “sensitive social issues.”
GARM can’t make platforms remove all potentially harmful content, but it can prod the platforms to study how much of this content appears on their services and when ads appear near that content.
Facebook has been ahead of other platforms in one key area: It tapped the Media Rating Council to audit parts of its platform as they relate to brand safety. Facebook will let MRC evaluate enforcement of its content and partner monetization policies, which govern the programs that show ads alongside content produced by third-party publishers. The MRC will evaluate how Facebook enforces brand safety measures in programs like Instant Articles, in-stream video ads and Facebook Audience Network. Facebook also has committed to an audit of the effectiveness of its community enforcement program, and will employ another outside firm to handle that next year.
Facebook has been adamant that it catches 95 percent of offensive content that breaks its rules, including hate speech, through artificial intelligence before it ever reaches the public. “We’ve invested billions of dollars to keep hate off of our platform, and we have a clear plan of action with the Global Alliance for Responsible Media and the industry to continue this fight,” a Facebook company spokesperson said in an email statement.
GARM is proposing that all platforms adopt Media Rating Council audits. GARM also proposes that platforms build “adjacency” controls into their own ad-serving systems and open them to third-party verification firms like Integral Ad Science, DoubleVerify and Moat.
Ellie Bamford, head of media at R/GA, says that right now most brands are concerned about how the elections are playing out on social media. “The main focus at the moment is the elections and what we can do to protect our clients and our brands from that,” Bamford says. R/GA has been analyzing every social media platform to see where there are brand safety controls.
“We’re really taking a close look at how this all operates and what we consider safe and controllable spaces that exist within all of the platforms,” Bamford says.
Corrections: Facebook says it catches 95 percent of offensive content that breaks its rules through artificial intelligence. An earlier version of the story misstated the number. Facebook also says MRC’s audit will focus on partner and content monetization policies in parts of its platform. An earlier version of this story misstated the scope of the audit.
IPG Mediabrands, which is a part of Interpublic Group of Cos., launched the initiative called Media Responsibility Principles. An earlier version of this story misattributed the author of the program.