Women's World Cup showed how safety guardrails cost brands valuable audiences
For 30 glorious days each year, brands in every category reach out to LGBTQ+ consumers and say, "We see you (and your wallets)." By July, the media and marketing industries have usually abandoned their rainbow-tinted Pride goggles. This year, though, the LGBTQ+ community was gifted a brief, unofficial encore, courtesy of the United States Women's National Soccer Team.
This unstoppable force—led by audacious, political, out co-captain Megan Rapinoe—captured the attention of a global audience on the way to their fourth World Cup title and beyond. It was the feel-good story of the year, particularly for LGBTQ women and sports fans, because Rapinoe and five other USWNT players are among the 42 out players in the tournament.
The media’s unequivocal embrace has rewarded digital publishers with a page-view bump for their sports sections. Deadspin’s viral headline, “Purple-haired lesbian goddess flattens France like a crepe,” garnered 578,200 page views alone. But brands using keyword blocking to avoid terms like “gay,” “lesbian,” “queer,” or “LGBTQ+” are missing out on all those enthusiastic eyeballs.
I’ve been an out gay man for most of my professional life, the past eight years as a technology partner to marketers. During that time, the advertising and marketing industry has significantly dialed up positive depictions of the LGBTQ+ community year-round and across categories, so much so that media watchdog GLAAD told me it has stopped keeping score.
Every June, brands pour millions of dollars into marketing rainbow merchandise, touting partnerships with organizations like The Trevor Project and Human Rights Campaign. A week later, standard keyword blocking practices could be keeping them from reaching one of the biggest news audiences of the year. The juxtaposition is staggering.
Last year, Vice magazine brought to light what it called “the improper use of keyword blocklists” to exclude ads from content around race, religion and LGBTQ+ issues. The research found that keywords “transgender” and “bisexual” were more often blocked than “shooting,” “porn,” or “killing.” Vice said it would no longer block terms that demonetize content for its diverse audience, including “Muslim,” “fat” and “interracial.”
It’s easy to understand why brands might want to double down on safety precautions. In 2017, our first Brand Rx study showed that 90% of marketing executives thought brand safety was a code-red level threat. Ads were popping up next to disinformation campaigns, extremist and hateful content and more. One year later, thanks to a variety of tech solutions, code-red concern among those executives dropped to 46%.
Many of those tactics, however, rely on an all-or-nothing approach to context. Good-faith efforts to back away from unsafe content have unintended consequences. Blocklists, allowlists and keyword blocking threaten to restrict campaign reach and undercut revenue for content and publications that serve LGBTQ+ and other communities that not only deserve coverage but also represent loyal consumer bases.
This spring, we spoke to a number of marketing executives about the dangers of creating brand safety guardrails that put audiences just beyond advertisers’ reach. Ken van Every, VP of programmatic at Cars.com, said this of journalism generally:
“When you throw out investment from news sites, you’re limiting the workforce they can employ and so you’re really limiting free speech and freedom of the press. It’s a nasty cycle.”
The good news is that we now work in an industry where technology allows us to identify not only “safe” contexts for brands but “suitable,” and even “fit,” ones. According to a study by Edelman Public Relations, 70% of consumers identify themselves as “values-driven.” When a corporation embraces a cause authentically, in a way aligned with its philanthropic and operational values, it can gain the loyalty of passionate communities. Smart marketers and agencies—like the ones who have decided to make their support of LGBTQ+ communities part of their mission—can identify content that aligns with that mission and use artificial intelligence to seek it out.
Technologies like natural language processing and image recognition, when applied together with other safeguards, can help us determine the sentiment and mood of a story. Coverage of Megan Rapinoe is a perfect example: Purple-haired lesbian goddess? Yes, please. Squabbling with President Trump? No, thank you. Likewise, content that contains hate speech, anger and contempt can be filtered.
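To make the contrast concrete, here is a minimal, purely illustrative sketch of the difference between blunt keyword blocking and a context-aware filter. The word lists, function names and scoring logic are toy assumptions invented for this example, not any vendor's actual product; a real system would rely on trained NLP classifiers rather than hand-written lists.

```python
# Illustrative sketch only. BLOCKED_KEYWORDS and NEGATIVE_SIGNALS are
# hypothetical toy lists, not a production brand-safety model.

BLOCKED_KEYWORDS = {"lesbian", "gay", "queer"}            # blunt identity-term blocklist
NEGATIVE_SIGNALS = {"hate", "slur", "attack", "killing"}  # hypothetical hostile-context cues

def keyword_block(headline: str) -> bool:
    """Old approach: block if any flagged keyword appears, regardless of tone."""
    words = set(headline.lower().split())
    return bool(words & BLOCKED_KEYWORDS)

def context_aware_block(headline: str) -> bool:
    """Sketch of a nuanced approach: block only on hostile-context signals."""
    words = set(headline.lower().split())
    return bool(words & NEGATIVE_SIGNALS)

celebratory = "purple-haired lesbian goddess flattens france like a crepe"
hostile = "crowd shouts slur during attack on fans"

# Keyword blocking demonetizes the celebratory story...
print(keyword_block(celebratory))        # True: ad blocked
# ...while the context-aware filter lets it through and still catches hostility.
print(context_aware_block(celebratory))  # False: ad allowed
print(context_aware_block(hostile))      # True: ad blocked
```

The point of the sketch is the asymmetry: the first function punishes identity terms no matter the sentiment, while the second keys on the actual tone of the content, which is what the suitability technologies described above aim to do at scale.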
Pride month is long gone, and soon the USWNT will wrap up their victory tour. But the brands that want to court a diverse audience and fund the media that serves it can do so year-round by closing the tech loopholes that limit their marketing campaigns and by embracing technology that allows a nuanced approach to brand safety.