Facebook is slowly moving forward with a brand safety test that advertisers say will have drastic implications for the future of content and ad delivery in its most highly trafficked real estate—News Feed.
In recent weeks, Ad Age spoke with multiple ad agency and ad tech partners about the program. If all goes as planned, it would give brands unprecedented control over where their ads appear in News Feed, which until now has been considered one of the most chaotic environments in digital marketing.
So far, Facebook has only revealed the most basic details of the test, in which participating advertisers will have “topic exclusion tools” to avoid appearing next to subjects like “crime and tragedy.”
“They’ll enable advertisers to avoid things like social issues, news and politics, crime and tragedy, and I think there will be a few other categories as well,” says one agency executive, who spoke on condition of anonymity.
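Facebook has not published the interface for the test, so the following Python sketch is purely hypothetical: it imagines a campaign-level exclusion setting with made-up category names modeled on the ones the executive lists, and a per-post topic label that gets checked before an ad becomes eligible for an adjacent slot. None of these names come from Facebook’s actual API.

```python
# Hypothetical sketch only: Facebook has not published this interface,
# and every name here is invented for illustration.
EXCLUDABLE_TOPICS = {"social_issues", "news_and_politics", "crime_and_tragedy"}

def slot_is_eligible(campaign_exclusions: set, post_topics: set) -> bool:
    """An ad may run next to a post only if none of the post's
    predicted topics appear in the campaign's exclusion list."""
    return not (campaign_exclusions & post_topics)

campaign_exclusions = {"crime_and_tragedy"}        # the advertiser's choice
post_topics = {"crime_and_tragedy", "local_news"}  # a classifier's output

print(slot_is_eligible(campaign_exclusions, post_topics))  # False: skip this slot
```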
There is also the question of whether a brand could avoid posts whose main message aligns with its tastes and sensibilities but in which a single comment broaches a taboo subject. These are the kinds of sensitive issues Facebook is still working through, and it could take all year before the full contours of the program take shape.
One part of the equation: Facebook needs to take into account the privacy of users when determining how much it can reveal to brands about context on the social network.
“It will include [user-generated content], which is what the major push from GARM and obviously us as part of our media responsibility index has been focused on, UGC adjacency rather than news content,” says Elijah Harris, senior VP and global head of social at Reprise, an IPG Mediabrands agency. GARM is the Global Alliance for Responsible Media, a group within the World Federation of Advertisers, which has been working with Facebook, Google, Twitter and others on hate speech and brand safety measures that could apply to the whole industry.
Advertising executives, some of whom spoke on condition of anonymity, have been in direct contact with top Facebook executives such as Carolyn Everson, VP of Facebook’s Global Business Group, to discuss participation in the program and how it will work. They say that if they can truly control where ads appear, accounting for the context of the messages above and below their own sponsored posts, that will influence how they bid in the ad auction and how they value certain inventory, and it even has ramifications for the Facebook algorithm, the secret formula that decides what content goes where in News Feed.
Advertising partners also say that Facebook’s commitment to this program represents a sea change in its thinking: after years in which Facebook’s teams insisted that context does not matter for advertising in News Feed, there is now a growing understanding that it absolutely does.
Brands, many of which have been caught up in the firestorm of misinformation circulating online, are pushing for the most robust set of brand safety tools possible, looking to avoid not only links to news stories but also offensive user-generated content. UGC makes up the bulk of Facebook posts and is considered the toughest nut to crack if topic exclusions are to apply to it as well.
Fixing the feed
There are signs that Facebook already is willing to make substantial changes to News Feed, too. In February, Facebook said it would experiment with showing fewer political posts in Canada, Brazil and Indonesia. Facebook's critics have contended that it drives people to political extremes, but Facebook has argued that its algorithm only optimizes for what people want. If that's sensational political views, then that's what they get.
Until now, it has been a lucrative arrangement for Facebook, as people tend to be more engaged when they are served content tailored to their political interests, even if others might find their choice of social media consumption objectionable. Of course, Facebook executives also have maintained that the social network does not benefit from any amount of violent rhetoric or hate speech, and the platform has taken steps to curb the worst abuses it finds. Facebook even banned former President Donald Trump, although some might claim that action came too late.
Facebook made about $85 billion in ad revenue in 2020, and no amount of backlash from advertisers over objectionable content has seemed to hurt its bottom line. Facebook has more than 10 million advertisers. That's why some advertisers were initially surprised that Facebook even agreed to the News Feed test given the enormity of the challenge; Facebook has 1.85 billion daily users.
“If you’re Facebook, you know your algorithm of what goes where and where ads go, that’s your license to print money,” says an ad tech CEO who works with platforms on brand safety challenges. “When you throw context into that mix, it’s going to make it more complicated.”
Facebook has long offered advertisers what are known as “adjacency” controls in other parts of its ad platform, such as within videos and Facebook Audience Network, the ad network that serves apps not owned by Facebook. News Feed is different, however, with billions of posts flying around, most of them generated by users themselves, and every link, meme, image, video and comment hard to police.
“It’s a huge engineering effort, right? Because they are talking about making fundamental changes to how the ads delivery system works, and that’s spread across billions of users, all different kinds of devices,” a senior executive at a top ad agency says. “It introduces new technical complexity, because it’s yet another bit of logic that’s fitting into the ad delivery system.”
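The “new bit of logic” the executive describes would, at minimum, have to consider the posts surrounding each candidate ad slot. The Python sketch below is a toy illustration under two big assumptions: that every post already carries machine-assigned topic labels, and that a slot is unsafe if either the post above or the post below carries an excluded topic. It does not reflect Facebook’s actual delivery code.

```python
from typing import NamedTuple

class Post(NamedTuple):
    id: int
    topics: frozenset  # assumed machine-assigned topic labels

def safe_slots(feed: list, excluded: frozenset) -> list:
    """Return feed positions where an ad could be inserted such that
    neither the post above nor the post below carries an excluded
    topic. Slot index i means 'between feed[i-1] and feed[i]'."""
    slots = []
    for i in range(1, len(feed)):
        above, below = feed[i - 1], feed[i]
        if not (above.topics & excluded) and not (below.topics & excluded):
            slots.append(i)
    return slots

feed = [
    Post(1, frozenset({"sports"})),
    Post(2, frozenset({"crime_and_tragedy"})),
    Post(3, frozenset({"cooking"})),
    Post(4, frozenset({"travel"})),
]
print(safe_slots(feed, frozenset({"crime_and_tragedy"})))  # [3]
```

In practice, classifying billions of posts across languages and formats is the hard part; the filter itself is trivial by comparison.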
It could also inflate ad prices for inventory around content that is typically deemed safe and depress prices on real estate around more taboo topics, the exec says. “We don’t know yet what the impact on processes are going to be or [on] bid density in the ad auction.”
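The bid-density point follows from auction mechanics. In a simplified second-price auction, the winner pays the runner-up’s bid, so when advertisers with topic exclusions drop out of impressions next to excluded content, the thinner field lowers the clearing price there, while those same advertisers pile onto the smaller pool of “safe” impressions and push prices up. The numbers and the auction model below are assumptions for illustration, not Facebook’s actual pricing logic.

```python
def clearing_price(bids, floor=0.01):
    """Simplified second-price auction: the winner pays the
    runner-up's bid, or the floor if no one else competes."""
    ranked = sorted(bids, reverse=True)
    return ranked[1] if len(ranked) > 1 else floor

# Five advertisers bid on one impression. Three of them exclude
# "crime and tragedy", so they drop out whenever the adjacent post
# carries that topic, thinning bid density and the clearing price.
all_bids = [2.40, 2.10, 1.90, 1.60, 1.20]  # "safe" inventory
remaining_bids = [1.90, 1.20]              # next to excluded topics

print(clearing_price(all_bids))        # 2.10
print(clearing_price(remaining_bids))  # 1.20
```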
Advertisers say there was a “tug of war” between Facebook’s product team and the team that works with agencies and brands. The product team had argued that it didn’t matter what content appeared above or below an ad, but recent events made that position untenable, according to the senior agency executive.
“I mean if you think about how far Facebook has come in terms of their position, eight months ago they had this hardline position that adjacency didn’t matter in the News Feed,” the senior exec says. “Today, they say, ‘We’re going to give you control. We admit that it does matter.’”
So, what changed?
Last year, Facebook was the target of an ad boycott organized by civil rights groups, including the NAACP and Anti-Defamation League. The groups called on brands to stop spending on the social network in July to protest hate speech and misinformation. The civil rights groups were concerned about the charged political atmosphere on Facebook, and other platforms, that could, and ultimately did, contribute to violence in the U.S. The boycotters wanted to shine a light on how hate groups were organizing on social media. More than 1,000 brands ultimately joined the Facebook boycott.
Verizon, Bayer, CVS Health, Dunkin’ Brands, Kimberly-Clark Corp., Mars Inc., PepsiCo and other major corporations joined the protest, but it did not dent Facebook’s ad revenue. The company was not keen to bow to outside pressure.
Ad partners and others say that Facebook did bend, though. In June, Reprise’s Harris was one of the first ad agency leaders to publicly rebuke Facebook over how it handled political and offensive speech.
“I think we heard for a while that Facebook does not respond to industry pressure,” Harris says. “But I think the events over the past three to six months show that they actually do.”
How big is the problem?
Facebook has been taking steps for years to improve its monitoring. In 2018, the social network delivered its first Community Standards Enforcement Report, which outlines how many posts it removes for violating its rules around adult content, violence, hate speech, drugs and other hot-button subjects.
That report has been one of Facebook’s go-to defenses for any brand that questioned its commitment to safety. The latest report from February, which covered Facebook and Instagram, shows that Facebook’s automated tools catch 97.1% of what the company considers hate speech before users report it. Facebook also claims the prevalence of hate speech on the platform falls between 0.07% and 0.08% of posts, that is, 7 or 8 of every 10,000 posts, a frequency low enough that brands would have little to worry about.
When one adds up the number of posts on Facebook, though, it becomes clear how challenging it would be to guarantee that every brand that wants to can control where it appears.
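A back-of-envelope calculation shows why the absolute numbers remain daunting even at that prevalence. The per-user consumption figure below is an assumption for illustration, not a number Facebook has published; only the 1.85 billion daily users and the 0.07% prevalence come from the figures above.

```python
# Back-of-envelope only: posts seen per user per day is an assumed
# figure for illustration; Facebook has not published it.
daily_users = 1.85e9               # Facebook's reported daily active users
assumed_posts_seen_per_user = 100  # hypothetical average feed consumption

posts_seen_per_day = daily_users * assumed_posts_seen_per_user
hate_speech_impressions = posts_seen_per_day * 0.0007  # low end of 0.07%-0.08%

print(f"{hate_speech_impressions:,.0f} hate-speech impressions per day")
# ~129,500,000 under these assumptions
```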
Tatenda Musapatike, a former Facebook staffer who leads ad campaigns for progressive digital nonprofit ACRONYM, says that Facebook clearly has its work cut out for it, but that it also has the ability to implement stronger safety measures.
“They have a beast, and if you’re going to have this really powerful communication method and make money off it, you need to do everything in your power to make it safe,” Musapatike says. “I think it’s clear from all the announcements that they’re making that they have not been actually prioritizing safety. Otherwise this would have been done already.”