Ad Age has learned that some Instagram viewers were subjected to a flood of porn this past weekend, including what appeared to be live sex shows. The company says it was a spam attack that broke through its defenses.
On Sunday night, according to an Instagram viewer, the live section of the app, where users, brands and publishers stream videos from their phones, featured at least seven accounts sharing sexual content. The explicit videos included at least one showing a couple engaged in oral sex.
"We choose not to advertise in live environments ... that's not TV for just that reason," says an exec from a top media agency holding company, who spoke on condition of anonymity.
"We are disappointed and sorry this happened," Beth Gautier, an Instagram spokeswoman, said in an email statement. "We care deeply about the quality of content on Instagram, and take spam, inauthentic and other abusive behavior very seriously. When we catch violating activity, we work to counter and prevent it, including blocking accounts."
Instagram, which has been called out before for the supposed ease with which pornography can be found on the platform, does not show ads in live videos or in Explore, the section of the app where live streams appear and users can discover new people to follow. However, accounts from brands and publishers do appear in Explore, as do videos from popular Instagram creators who could have sponsors.
Instagram's owner, Facebook, does sell ads inside live videos, called Ad Breaks, which were the first mid-roll commercials on the social network, designed to let video creators show ads while streaming. And Facebook Live has hosted its fair share of disturbing videos, like the man who filmed his Cleveland killing spree in April, or an incident of torture streamed out of Chicago.
But the social network has implemented a number of measures for brands to control where video ads run on the social network, as well as on websites that use Facebook's ad network. For instance, advertisers can create blacklists that prevent their ads from showing alongside specific publishers, and they can choose categories of content to avoid like gambling or politics, among other topics.
"Anyone who is advertising in a social media environment is very conscious of the content and context, where there may be brand suitability issues," says Andrea Ching, CMO of OpenSlate, a social video technology company that focuses on brand safety on YouTube.
Facebook and Instagram are not the only services that have dealt with disturbing videos. On YouTube earlier this year, advertisers discovered their ads running against videos featuring extremist and terrorist-inspired content. And just last week, videos that sexualized children were uncovered on YouTube.
Meanwhile, Snapchat, which had an early reputation as a sexting app, has also run into issues with brands appearing in close proximity to sexually explicit content. Earlier this year, ads were popping up in video stories from pornographic accounts.
Since then, Snapchat has beefed up its vetting, and it even has a way to prevent risky accounts, ones that are likely to share nude videos, from accessing filters and lenses, according to an ad agency executive who was familiar with Snapchat's high-tech defense.
"You're never going to find anywhere that's 100 percent safe," says one agency exec. "There just has to be transparency, knowing where an ad winds up running and giving brands the ability to see that."