A smattering of terrorist propaganda popping up on Instagram is highlighting one of the thorniest dilemmas facing brands on social media: How can advertisers ever be 100 percent safe from appearing near objectionable material?
In recent weeks, an independent monitor has uncovered Islamic State videos on Instagram, and then identified sponsored posts from top brands appearing above and below the harmful videos. Instagram, which is owned by Facebook, was alerted to the terrorist activity, and an investigation led to the removal of at least 28 accounts.
Eric Feinberg, the researcher who uncovered the activity, has been working to identify bad actors on platforms like Facebook and YouTube for years, with particular focus on extremists. In this case, Feinberg, through his cyber-security organization GIPEC—or Global Intellectual Property Enforcement Center—looked for videos on Instagram following last month’s death of Abu Bakr al-Baghdadi, the former leader of the Islamic State in Iraq and Syria.
The death of the terrorist leader became a topic of discussion for extremist elements online, including on apps like TikTok, the China-based short-video app. Terrorist-inspired videos produced on TikTok were appearing on Instagram. Feinberg then followed the accounts that created the terrorism-related posts, and Instagram began feeding ads amid the extremism coursing through his feed. The sponsored Instagram posts were from brands including Hershey’s, Pepsi, Nordstrom and Marriott, among many other advertisers. The brands did not return requests for comment.
When alerted to the activity, Facebook said its own safety software had independently flagged some of the accounts. A deeper investigation then uncovered 28 more accounts, which were removed for violating the company’s policies, a spokeswoman said by e-mail.
Instagram took issue with Feinberg’s methods, saying they offer a slanted view of the service. “This research is a false representation of what people see on Instagram, since GIPEC created an artificial feed that only followed bad actors for the sake of this project,” said Karina Newton, public policy director at Instagram, in an e-mail statement. “In reality, we know that fewer than four in 10,000 views on Instagram contain terrorist content. But even a single piece of bad content on Instagram is too many, and that’s why we constantly use teams and technologies to get better and keep our community safe."
Still, those four views in every 10,000 are a concern to advertisers, according to ad agency executives contacted for this story. “I think there is much, much more on the platforms than they’re letting on,” says one top executive at a major advertising firm, speaking on condition of anonymity because of the sensitive nature of the subject.
The problems of "brand safety" have been one of the industry's biggest obsessions. Since advertisers found their ads running on extremist videos on YouTube in 2017, they have been pressing platforms to account for how their digital dollars could support bad actors. YouTube, owned by Google, shares ad revenue with accounts that post videos, and if some of those accounts promote extremism and other objectionable material, then brands are seen as indirectly funding them. YouTube has since overhauled its creator program, monitoring the accounts more closely and opening up to more independent industry oversight.
Brands have also pushed for more control over how ads are served across the web through programmatic ad technology. Ads can appear on offensive websites, or near material that doesn't align with a brand's values, a concern that became pressing during the 2016 U.S. election season. Advertisers began paying attention to the proliferation of shady sites promoting fake news, supported by the automated online ad ecosystem run by companies like Facebook and Google.
The latest Instagram flare-up, however, is a different type of concern. It’s about “adjacency”: an ad appearing alongside social media posts that brands have no ability to control. It’s an intractable problem, because social media feeds are tailored to the individual consumer, who may be following accounts that don’t align with a brand.
“There is a lot of conversation around the industry about ‘in-feed’ advertising,” says Joshua Lowcock, chief digital officer of UM Worldwide. “But there is a difference between monetizing that content and advertising within, say, Facebook News Feed.”
Still, the industry is clearly applying pressure to all platforms to clean up. This summer at the Cannes Lions International Festival of Creativity, 16 of the world’s top advertisers helped form the Global Alliance for Responsible Media, a group that has been lobbying for more power to police platforms, which are viewed as secretive "walled gardens." “It’s about being a responsible advertiser,” Lowcock says, “where you’re investing money so you’re driving positive outcomes for society.”
Just this week, Facebook showed how it is adjusting its ad offerings to placate the ad community over “brand safety” concerns. Carolyn Everson, Facebook’s VP of global marketing solutions, rolled out a series of updates designed to give brands better control of where ads appear, and more third-party measurement to give an independent view into the platform. Third-party measurement firms help track where ads run, detecting if the messages popped up on undesirable websites or videos.
Facebook also recently released its latest “transparency report,” which comes every six months and discloses how often the platform responded to harmful content, and how prevalent that content is. The report delves into extremist accounts, child exploitation and fake accounts, among other subjects. The most recent report included Instagram for the first time. “In [the third quarter] on Instagram, we removed 133,300 pieces of Terrorist Propaganda content, of which 92.2 percent we detected proactively,” the report said.
The U.S. government also is paying attention to how the platforms handle terrorist content and other illicit activity. For years, the platforms have been protected by Section 230 of the 1996 Communications Decency Act, which gives internet services immunity from liability for the activity and content of the people who use them. There have been growing calls from within Congress to withdraw that immunity, making platforms more accountable for their content.
On Thursday, freshman U.S. Representative Max Rose put forward a bill that would rate social media companies on their ability to identify and remove terrorist content.
A group that tracks online extremism, Ghost Security Group, says that Instagram is a relatively quiet corner of the internet when it comes to terrorist propaganda. “In our experience, Instagram and TikTok are not the places to be concerned with,” said a spokesman for the group in an email. “Instagram is insanely good at taking down extremist accounts and by the time we see a link and go to check it, it's already been suspended.”
Even the most sophisticated software and artificial intelligence will have a hard time purging terrorists completely from the platform. Feinberg, the social media watchdog, says that the “bad stuff” finds people all too easily. Once a person shows any interest in a type of content, the algorithm that personalizes people’s feeds starts serving them more of the same.
“I am not searching bad stuff, but due to the algorithms bad stuff is searching and being supplied to me and being supplied to other users,” Feinberg said.