IPG Mediabrands has pursued similar goals with an initiative called the Media Responsibility Principles, which promotes advertising transparency and urges platforms to police hate speech and harmful posts.
The difficulty of policing social media was made even more evident last week with the violence in Kenosha, Wisconsin, where a shooter was accused of killing two protesters and wounding a third during demonstrations demanding justice for Jacob Blake, the latest victim of police violence. The shooter appeared to be affiliated with a militia group that had organized on Facebook, the kind of group Facebook had promised just days earlier to banish from its service.
Facebook said it has not uncovered links that tie the shooter to militia groups on the social network. “We have not found evidence on Facebook that suggests the shooter followed the Kenosha Guard Page or that he was invited on the Event Page they organized,” a Facebook spokesperson said by email.
Facebook had announced a crackdown on militia and QAnon conspiracy groups after civil rights groups such as the NAACP and the Anti-Defamation League pressured the social network with the ad boycott.
“Facebook will need to prioritize some form of in-feed brand safety controls as well as solve for Facebook Groups in order to demonstrate it cares about protecting society and advertisers,” says Joshua Lowcock, chief digital officer at UM and global brand safety officer at Mediabrands, which is a part of IPG.
Facebook has put its faith in GARM to be part of the answer to its problems, but its guidance can’t come soon enough. GARM is drafting definitions in 11 areas of hate speech and other offensive content, including sex, violence, guns, drugs, crime and human rights violations. An early draft of its proposal shows GARM also plans to introduce a new category for “disinformation,” tied to its section on “sensitive social issues.”
GARM can’t force platforms to remove all potentially harmful content, but it can prod them to study how much of that content appears on their services and how often ads appear near it.
Facebook has been ahead of other platforms in one key area: It tapped the Media Rating Council to audit the parts of its platform that relate to brand safety. Facebook will let the MRC evaluate its enforcement of content and partner monetization policies, which govern the programs that show ads in content produced by third-party publishers. The MRC will evaluate how Facebook enforces brand safety measures in programs such as Instant Articles, in-stream video ads and Facebook Audience Network. Facebook has also committed to an audit of the effectiveness of its community enforcement program, and will employ another outside firm to handle that next year.
Facebook has been adamant that it uses artificial intelligence to catch 95 percent of offensive content that breaks its rules, including hate speech, before it ever reaches the public. “We’ve invested billions of dollars to keep hate off of our platform, and we have a clear plan of action with the Global Alliance for Responsible Media and the industry to continue this fight,” a Facebook company spokesperson said in an email statement.
GARM is proposing that all platforms adopt Media Rating Council audits. It also proposes that platforms build “adjacency” controls into their own ad-serving systems and open them to third-party verification firms such as Integral Ad Science, DoubleVerify and Moat.
Ellie Bamford, head of media at R/GA, says that right now most brands are concerned about how the elections are playing out on social media. “The main focus at the moment is the elections and what we can do to protect our clients and our brands from that,” Bamford says. R/GA has been analyzing every social media platform to see what brand safety controls each offers.
“We’re really taking a close look at how this all operates and what we consider safe and controllable spaces that exist within all of the platforms,” Bamford says.
Corrections: Facebook says it catches 95 percent of offensive content that breaks its rules through artificial intelligence. An earlier version of the story misstated the number. Facebook also says MRC’s audit will focus on partner and content monetization policies in parts of its platform. An earlier version of this story misstated the scope of the audit.
IPG Mediabrands, which is a part of Interpublic Group of Cos., launched the initiative called Media Responsibility Principles. An earlier version of this story misattributed the author of the program.