Integral Ad Science pulls brand-safety product demo after social media backlash

IAS pulled the Context Control demo from its website following criticism.
A new brand-safety product from Integral Ad Science is off to a less-than-ideal start after the company promoted the tool’s technical chops through an open demo on its website last week.
The product, dubbed Context Control, is critical to IAS’s long-term future. Its launch went south after IAS rival Check My Ads accessed the demo, which was open to anyone, and shared screenshots on social media showing it categorized non-explicit homepage language on controversial websites as “neutral.” IAS was swiftly criticized and, within 24 hours, removed the demo from its website. But not everyone agreed that the tool was malfunctioning.
IAS insists its product worked as intended. The demo on its website measured sentiment, IAS says, which is not an endorsement of websites or publishers but rather one of many indicators used to determine whether a piece of content is brand-safe. Its technology weighs the negative, neutral and positive elements of content by checking which nouns, adjectives, verbs and adverbs are used. Although most marketers wouldn’t classify InfoWars and 4Chan as brand-safe, the language on both of their homepages, which is what Check My Ads measured, is indeed neutral in sentiment, IAS says.
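The demo itself is gone, but the general technique is easy to illustrate. The sketch below runs NLTK’s off-the-shelf VADER analyzer over a few invented snippets; it is not IAS’s proprietary model, only a generic, lexicon-based example of why short, navigational homepage copy tends to score as neutral regardless of which site it comes from.

```python
# Minimal illustration of lexicon-based sentiment scoring with NLTK's VADER.
# This is NOT IAS's model; the snippets below are hypothetical.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

snippets = {
    "homepage copy":  "Watch the latest videos. Browse all boards. Sign up for updates.",
    "sports story":   "A thrilling comeback win capped a great night of basketball.",
    "violent story":  "The attack left several people injured and the city in fear.",
}

for label, text in snippets.items():
    scores = sia.polarity_scores(text)  # returns neg / neu / pos / compound
    print(f"{label:14s} neg={scores['neg']:.2f} neu={scores['neu']:.2f} pos={scores['pos']:.2f}")
```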
Other industry experts seem to agree. “It might be the case that the emotion used on most pages of these sites’ [homepages] is actually neutral instead of positive or negative in nature from a sentiment perspective,” says Ken Weiner, chief technology officer at GumGum, a contextual advertising company that “reads” the content of any given webpage.
“4Chan seems to be an adult website with sexually charged material, but the language used, at least on the homepage, did seem sentiment-neutral to me,” says Weiner.
An IAS spokesman said on Thursday that the tool is now live and in use by IAS customers.
Brand safety isn't easy
Publishers and marketers have historically criticized companies such as IAS for blocking ads from appearing next to content that is obviously brand-safe because of outdated practices such as keyword blocking. A brand seeking to avoid having its ads appear alongside content about violence, for instance, might have its ads blocked from showing up next to a story about a basketball game simply because the word “shooting” was detected.
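A toy version of keyword blocking shows how that false positive happens; the blocklist and story text below are hypothetical.

```python
# Toy sketch of keyword blocking, the practice the article says causes false positives.
BLOCKLIST = {"shooting", "attack", "bomb"}

def keyword_blocked(article_text: str) -> bool:
    """Return True if any blocked keyword appears anywhere in the text."""
    words = {w.strip(".,!?").lower() for w in article_text.split()}
    return bool(words & BLOCKLIST)

basketball_story = "Her three-point shooting carried the team to a win."
print(keyword_blocked(basketball_story))  # True: the ad is blocked despite safe content
```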
But products such as Context Control aim to solve such problems by using contextual indicators, including sentiment, to determine whether a piece of content is brand-safe.
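IAS has not published how Context Control combines its signals, so the following is only a hedged sketch of the general approach: a sentiment score is one weighted input alongside hypothetical topic- and publisher-level risk signals, and a neutral tone alone cannot make a page suitable. The signal names, weights and threshold are placeholders, not Context Control’s actual logic.

```python
# Hedged sketch of combining several contextual signals into one suitability call.
from dataclasses import dataclass

@dataclass
class PageSignals:
    sentiment: float       # -1.0 (negative) .. 1.0 (positive), from a sentiment model
    topic_risk: float      # 0.0 (safe topic) .. 1.0 (high-risk topic)
    publisher_risk: float  # 0.0 (trusted publisher) .. 1.0 (flagged publisher)

def is_suitable(page: PageSignals, threshold: float = 0.5) -> bool:
    """Blend signals into one risk score; sentiment alone never decides."""
    risk = (0.2 * max(0.0, -page.sentiment)   # negative tone adds some risk
            + 0.4 * page.topic_risk
            + 0.4 * page.publisher_risk)
    return risk < threshold

# A page can read as "neutral" yet still fail on topic and publisher signals.
print(is_suitable(PageSignals(sentiment=0.0, topic_risk=0.9, publisher_risk=0.9)))   # False
print(is_suitable(PageSignals(sentiment=-0.2, topic_risk=0.1, publisher_risk=0.1)))  # True
```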
"Brand safety will protect you, brand suitability will upgrade your campaign outcomes,” said IAS Chief Marketing Officer Tony Marlow in an email to Ad Age. “The inclusion of sentiment capabilities into the Context Control suite is a suitability-oriented addition which is both complementary and incremental to existing safety measures. Client demand for sentiment capabilities has been high because marketers are intensely focused on their outcomes, now more than ever before.”
Joshua Lowcock, global brand safety officer at UM, believes brand safety is indeed a solvable problem, but says marketers shouldn’t rely on any single solution.
“It should instead be a multifaceted strategy that looks at the publisher, context and even the creative being used by the advertiser to make sure that, in combination, they are a fit,” says Lowcock. “The industry needs to move away from keyword blocking, but even contextual tools are imperfect. You need to look at the publisher and ask yourself if they’re contributing good or harm to society.”
Others, such as IAS competitor Check My Ads, claim measuring something like sentiment is unnecessary. “There is no evidence that appearing on negative-sentiment content leads to any reputational harm or business risk for a brand,” says Nandini Jammi, Check My Ads co-founder and CEO. “Measuring sentiment is trying to solve a problem that doesn’t exist.”
The lack of consensus underscores the complexities involved in brand safety. YouTube and Facebook, for instance, have long grappled with keeping content safe for advertisers. Big picture, brand marketers must decide whether they should adopt a single solution that primarily relies on tech, or a blend that involves both humans and technology. The noise from companies offering the solutions, however, doesn't make the decision easy.
“Nobody knows anything,” says a chief marketing officer of a multibillion-dollar company, who asked to remain anonymous to protect industry relationships. “This space is in its infancy.”
“You can’t believe what the company selling the product is saying,” the person added. “There's a lot of tricks these companies can do to make their products look like they're working, and they work—until they don't.”