Facebook’s plan to clean up News Feed for brands involves categorizing up to 1.8 billion daily users by their “propensity” to share political posts, and then giving advertisers the option to avoid targeting those accounts, according to ad executives familiar with the social network’s new brand safety tests.
In recent weeks, Facebook began an “alpha” test offering “topic exclusions” to a select group of advertisers. The social network first promised to develop these controls in January.
Facebook has not said which brands will be among the first to experiment, and has shared few details about the technical challenges. Advertising executives, who spoke with Ad Age on the condition of anonymity, say the early concept is taking shape: it involves analyzing tens of millions of user accounts and excluding ads from the feeds of those deemed unsuitable for brands that want to avoid “news and politics.”
“They are creating these propensity models; propensity for harmful or violating content to show up in someone’s News Feed,” says one ad agency exec. “Based on that they determine to monetize that person’s feed or not.”
Facebook declined to comment on specifics, but did acknowledge the tests. “We have begun building and testing Topic Exclusion Controls for News Feed with a small group of advertisers, but it’s still early days,” a spokesman said in an email. “Testing and learning will take place most of the year and determine how we move forward. We’ll share more when we have it.”
The News Feed “topic exclusion” program represents a shift in Facebook’s thinking. For years, the social network maintained that context is not an important variable in News Feed—that the topics of posts appearing above or below an ad had no bearing on how well the ad performed.
However, Facebook has been criticized for the level of disinformation and hate speech circulating on the service, and brands have demanded ways to ensure sponsored posts do not appear adjacent to objectionable content. The brand safety issue was amplified by events such as the protests following the police killing of George Floyd in May last year and the Jan. 6 insurrection at the Capitol. Floyd’s death and the subsequent civil rights protests even led to a monthlong advertiser boycott of Facebook.
The News Feed issue is also directly related to other major initiatives stemming from the boycott. Facebook agreed to audits of how well it adheres to brand safety guidelines set by the Global Alliance for Responsible Media, which is part of the World Federation of Advertisers. The group developed criteria for what constitutes categories such as hate speech. Facebook also agreed to an audit of its Community Standards Enforcement Report.
The report is a quarterly look at how much offensive content—including hate speech, sexual content and bullying—it removes. Facebook has said it keeps the prevalence of such content low, limiting the chances that an ad would show up near those posts. Brands want that claim confirmed by an independent party.
Advertisers are getting more details about how “topic exclusions” will work. One advertiser said they resemble controls already in place for video ads on Facebook, where brands can exclude appearing in content related to news, politics, gaming and religion and spirituality. So far, Facebook has only said it was testing “news and politics” exclusions in News Feed, but advertisers say the categories will expand.
One hurdle, according to another advertiser, is that an account could be flagged as undesirable for ads while the people connected to it as friends are not. If one of those friends sees posts from the flagged account in their own feed, an ad could still show up next to that content, the advertiser said.
Matt Skibinski, general manager of NewsGuard Technologies, says that weeding out unsafe content in News Feed will be a challenge. NewsGuard has experience analyzing thousands of websites to give brands reports on which ones share disinformation and hate speech, and which are credible for advertisers.
Facebook’s plan does parallel developments in the rest of the news industry, where brands began creating blocklists that avoid news altogether, harming the revenue prospects of respected publishers like The New York Times and The Wall Street Journal, Skibinski says.
“There are all these different ways that misinformation is thriving on Facebook, still to this day, and so I completely understand why they’re trying to solve the problem,” Skibinski says. “The idea that the way to solve for misinformation and polarizing content for advertisers on Facebook is to block based on the topic of ‘news and politics’ feels to me like a case of using a blunt solution instead of a scalpel.”