Meta, the parent company of Facebook and Instagram, has retooled the system that controls where ads appear in social media feeds so it can verify to brands that their ads are not running next to harmful posts, a long-standing concern for marketers online.
On Tuesday, Meta announced new tests of brand safety controls, overhauling an earlier program launched when the company was still known as Facebook. In some ways, Meta went back to the drawing board to fix its safety controls. The new method of serving ads into contexts that meet a brand’s comfort level provides more transparency, said Samantha Stetson, Meta’s VP of industry relations and client council. Under the new approach, an advertiser will have a better sense of the specific content that surrounds its ad on Facebook and Instagram.
“[Advertisers] wanted true control over the actual adjacency placement,” Stetson said in an interview this week. “They really wanted to control what content was above or below.”
Over the past two years, as advertisers saw alarming political rhetoric and hateful conduct spread on social media, the demand for controls over ad placement grew. Major brands began pushing Meta in particular, but also Twitter, Reddit, TikTok and YouTube, to give them more tools to keep their ads away from content that violated their values.
Meta began feeling the pressure, especially after the civil rights uprisings in the summer of 2020 and during the heat of the presidential election. Brands became more adamant that their ads not appear anywhere near posts that could be construed as supporting hate or even violence. After the Jan. 6 Capitol attack, Meta began setting firmer timelines for implementing news feed controls. Meta picked Zefr as the third-party measurement firm that can relay to brands whether their ads appeared in pre-approved settings. Tests of the new program begin this quarter, and it will be offered widely next year.
Reporting on the context of ads
Facebook has 1.97 billion daily users, each with a personalized feed that delivers content tailored to their interests, so it is a daunting task to target ads in a way that accounts for every subject that could appear above or below them. As Stetson explained, Facebook’s algorithms were designed with two separate ranking systems, one that ranks content for individual users and another that picks the ads. “You now have to re-engineer the whole back end to make the two systems, to have a content relationship, as well, and take that into consideration,” Stetson said.
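To illustrate the re-engineering problem Stetson describes, here is a minimal Python sketch of two independent ranking systems joined by an adjacency check. Every name, data structure and scoring rule below is hypothetical; Meta’s internal systems are not public.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    topic: str
    relevance: float

@dataclass
class Ad:
    brand: str
    bid: float
    excluded_topics: set = field(default_factory=set)

def rank_feed(posts):
    # First ranking system: order organic content for one user.
    return sorted(posts, key=lambda p: p.relevance, reverse=True)

def rank_ads(ads):
    # Second, separate ranking system: order ads for the same user.
    return sorted(ads, key=lambda a: a.bid, reverse=True)

def build_feed(posts, ads, slot_every=3):
    """Interleave ads into the feed, skipping any ad whose exclusions
    conflict with the posts directly above or below its slot."""
    ranked = rank_feed(posts)
    pending = rank_ads(ads)
    feed = []
    for i, post in enumerate(ranked):
        feed.append(post)
        if (i + 1) % slot_every == 0:
            above = ranked[i].topic
            below = ranked[i + 1].topic if i + 1 < len(ranked) else None
            for ad in pending:
                if above not in ad.excluded_topics and below not in ad.excluded_topics:
                    feed.append(ad)
                    pending.remove(ad)
                    break
    return feed
```

The point of the sketch is the extra join step: neither ranking system alone knows what ends up next to what, so adjacency control means wiring the two together.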
Meta’s first solution for controlling ads, which it started testing last year, had drawbacks. Meta wanted to implement “topic exclusions,” in which brands could pick broad swaths of content they wanted to avoid, including crime, social issues and news. Topic exclusions are a method Meta has used for brand controls in other parts of the platform, including within videos that brands directly sponsor in places such as Facebook Watch. With topic exclusions in the feed, Meta analyzed individual users’ personal feeds for the amount of content that fell into the exclusion categories. If a user was highly likely to be a consumer of those topics, that user was effectively flagged as unsuitable for a given brand, if the advertiser wanted to avoid such content. The system fell short because it did not tell advertisers exactly where their ads appeared; advertisers only had assurances that they would not reach users who had been placed in a kind of ads penalty box.
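To make that earlier mechanism concrete, here is a hedged Python sketch of user-level topic exclusion. The topic names come from the article, but the keyword classifier, the 30% cutoff and all function names are invented for illustration and do not reflect Meta’s actual pipeline.

```python
# Hypothetical sketch of feed-level topic exclusions: flag a user as
# unsuitable for a brand if too much of their recent feed falls into
# topics the brand wants to avoid. Thresholds and keywords are invented.

EXCLUDED_FOR_BRAND = {"crime", "social_issues", "news"}
UNSUITABLE_SHARE = 0.30  # assumed cutoff, not a real Meta value

def classify_topic(post_text):
    # Stand-in for a real content classifier.
    keywords = {"crime": "arrest", "news": "breaking", "social_issues": "protest"}
    for topic, kw in keywords.items():
        if kw in post_text.lower():
            return topic
    return "other"

def is_unsuitable_user(recent_posts, excluded=EXCLUDED_FOR_BRAND):
    """Return True if the share of excluded-topic content in the user's
    recent feed crosses the threshold -- the 'ads penalty box'."""
    if not recent_posts:
        return False
    hits = sum(classify_topic(p) in excluded for p in recent_posts)
    return hits / len(recent_posts) >= UNSUITABLE_SHARE
```

The limitation the article describes falls out of this design: the flag applies to a whole user, so it says nothing about which specific posts end up directly above or below any given ad.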
Meta views the new controls as an evolution of the topic exclusions it tested last year.
Under the new system, Meta, working with third-party measurement firms including Zefr, will provide reporting on the context in which ads appear.