The sites operate more responsibly than some, for example by not allowing dark web uploads made through the anonymizing Tor browser, said Jacques Marcoux, director of research and analytics for C3P. But he said many of the thousands of sites distributing CSAM likely wouldn’t exist without ad support.
Whether such images are illegal isn’t always clear. Many appear to be AI-generated or manipulated, according to C3P, which said some of the images transmitted by Adalytics have been circulating on the internet since as early as 2018. AI-generated CSAM may not be illegal under federal law, though it clearly is in some jurisdictions, such as California. Definitions can vary, which was a factor in C3P identifying only 35 of the more than 600 images from links sent by Adalytics as CSAM.
Advertiser outrage
Regardless of the technicalities, advertiser and agency executives contacted by Ad Age, who had seen a draft of the Adalytics report in advance, were outraged.
“If something like this can get through a verification provider, we should question the entire validity of the provider to protect a brand,” said Jay Friedman, CEO of the agency Goodway Group.
One advertiser who spoke on background said their company is generally looking to fix things with ad tech players in programmatic, “not a gotcha” or a chance to collect refunds. “But when I saw this, my reaction was very much different,” the executive said. “There’s someone, somewhere, like an actual person, who’s responsible for making sure this doesn’t happen. I got really upset.”
“What pisses me off the most is that no one is incentivized to care about this,” said another marketing executive. “None of the technology platforms we’ve used flagged it. It took someone doing open source work [Adalytics] to flag it. If we cared more about what we’re doing and put a little bit more due diligence in this, we wouldn’t be monetizing these kinds of sites.”
The research makes brand safety vendors and DSPs look like “clowns,” the executive said. “What we need to focus on is why are we enabling clowns?”
Not every DSP was involved in monetizing the sites. In the Adalytics research, The Trade Desk stands out as the largest DSP that showed no evidence of having served ads on imgbb.com or ibb.co since 2021. Basis Technologies and the buy-side operation of Publicis Groupe’s Epsilon also didn’t. Nor did DSPs for Walmart Connect, Target’s Roundel or Dollar General. Adalytics noted Kargo and TrustX as two supply-side platforms that hadn’t transacted on the sites.
DSPs and DV respond
Amazon in a statement said: “We regret that this occurred and have swiftly taken action to block these websites from showing our ads. We have strict policies in place against serving ads on content of this nature, and we are taking additional steps to help ensure this does not happen in the future. We’ve received the senators’ letter and are working on our response.”
Amazon last month appeared to be the DSP serving the most ads on a sample of pages with adult pornography on the sites, as observed by Ad Age. As of Feb. 6, after Ad Age reached out to an advertiser whose ads had appeared with adult porn on ibb.co, Amazon’s DSP wasn’t observed serving any ads there. Other advertisers interviewed, who spoke on the condition of anonymity because they weren’t authorized by their companies to speak to the press, also said Amazon was the DSP responsible for most of their brands’ ads being served on the sites.
An agency executive who asked to remain anonymous shared results of an audit for a client last year that indicated Amazon Publisher Services was the leading supply-side source of inventory from imgbb.com and ibb.co, accounting for around two-thirds of placements, albeit in a small sample with under $15 in spending.
Advertisers’ observations from their log files, the agency audit and Ad Age’s own observations all could have been skewed by the nature of the businesses involved, time of day or browsing history, so they don’t amount to a quantitative study of which DSP or supply source was responsible for the most ads on the sites.
An Amazon spokesman added, in a statement: “The methodology of this research is fundamentally flawed and misleading, and its conclusions are likely completely inaccurate. A very small number of advertisers and very small amount of ad spend occurred on these sites. We refunded any advertiser who advertised on these sites.”
“We have zero tolerance when it comes to content promoting child sexual abuse and exploitation,” said a Google spokesman. “As this report indicates, we took action on these sites last year. Our teams are constantly monitoring Google’s publisher network for this type of content, and we refer information to the appropriate authorities.”
Adalytics found no ads served to the Urlscan.io bot via Google properties since July 2024. The report does, however, show a Google Pixel ad served via the Amazon DSP next to adult porn on ibb.co in November 2024.
Seven advertiser and agency executives, who spoke on background, said their log files showed DV or IAS had classified placements on imgbb.com and ibb.co as “100% brand safe.” The spending was small, however, and because the advertisers weren’t able to get URL-level data, only domain-level data, from Amazon and Google DSPs, they weren’t able to tell if their ads appeared next to adult or child porn.
Several executives said it’s important for DSPs to provide page-level log files, not just domain names, for monitoring such placements and checking the performance of ad verification companies, but that Amazon and Google are notable for not providing such detail. Amazon and Google didn’t comment on whether they provide URL-level data. Others, such as Criteo, Microsoft and Outbrain, declined to comment on whether they do. The Trade Desk does, advertisers said.
A statement from IAS said the company “has zero tolerance for any illegal activity, and we strongly condemn any conduct related to child sexual abuse material. We are reviewing the allegations and remain focused on ensuring media safety for all of our customers.”
In a blog post, DV criticized Adalytics for allegedly misunderstanding how its ad tech works and claimed that problematic images from the sites accounted for a small amount of ad traffic, but it promised “an additional comprehensive review of ad-supported image-hosting sites on the open web that are within our system” and said it would place those sites under stricter classification standards.
DV also said it’s “defining a mechanism” to block anonymous image-hosting sites and creating a site-level avoidance category to protect clients from advertising on peer-to-peer photo sharing and streaming domains and apps “that could be abused by hosting or distributing illegal content.”
DV CEO Mark Zagorski also sits on the board of Outbrain, whose DSP was observed serving ads on ibb.co pages with adult porn until late Feb. 7, after the senators’ letters and DV’s blog post were published. DV didn’t respond to a query as to whether Zagorski or DV had ever passed along information about the site to Outbrain.
Outbrain in a statement said: “Our publisher guidelines explicitly bar sexual content of any form, most importantly content promoting illegal activities. We do not condone abuse of any kind, and strongly stand against the abuse of children.”
Criteo and Microsoft were also cited in the Adalytics report and independently confirmed by Ad Age to have served ads on imgbb.com and ibb.co, though they were not named in the senators’ letter. Flashtalking was observed by Ad Age serving ads to adult porn pages on ibb.co through late last week.
Criteo and Microsoft spokespeople both said that their companies stopped serving ads on the sites, which violated their policies and eluded their detection efforts, shortly after they were contacted by Ad Age. “We are reviewing our process to minimize the chance of this happening again,” said a Criteo spokesperson.
Where were the watchdogs?
The senators’ letters sharply criticized two of the ad industry’s leading watchdogs, TAG and the MRC, saying their “actions here—or at best, inaction—have raised several concerns.”
Despite TAG’s “Brand Safety Certified” guidelines calling for blocking ads on explicit sexual content and content that violates human rights, at least nine companies certified under those guidelines placed ads on imgbb.com, the senators’ letter to TAG said.
A separate letter from the two senators takes the MRC to task for its oversight of DV and IAS, which hold ad verification accreditations that include brand safety on the open web, but only at a domain or “property” level. According to people familiar with the matter, MRC accreditation for the verification vendors doesn’t apply to page-level content, only to classifications of domains as a whole. While many DSPs and supply-side platforms involved in monetizing the sites also have MRC accreditations, those appear not to cover brand safety on the open web. MRC CEO George Ivie declined to comment as he worked on a response to questions posed by the senators.
TAG sees ‘publicity stunt’
In an interview on Feb. 11, TAG CEO Mike Zaneis said he still hadn’t directly received the letter from the senators, and described their move and the Adalytics report as a publicity stunt.
“We don’t know for sure that ads were actually appearing next to child pornography content, and we don’t know if those pages were actually being indexed for brand safety and other measurement. That is one of the big acknowledged caveats from the research,” Zaneis said. “Adalytics is taking a leap of faith to connect all of that. But I get it. It’s great publicity, right? This report gets probably a C- as far as methodology and conclusions, but it gets an A+ for publicity stunt.”
Adalytics CEO Krzysztof Franaszek emailed a response to Ad Age.
“For leaders in ad tech to suggest Adalytics would falsely report CSAM to law enforcement for the sake of a ‘publicity stunt’ is exactly the cynicism that prevents serious concerted industry action to take the profit out of a profound social evil,” Franaszek said.
“Adalytics can confirm—unequivocally and on the record—that ads from numerous Fortune 500 brands were indeed served directly adjacent to child sexual abuse material,” he added. “Screenshots of this evidence were not included in the report, as the intentional viewing, copying or distribution of CSAM is a violation of federal law.”
Adalytics reported the suspected CSAM to federal authorities, NCMEC and C3P in January shortly after finding the images, its report noted.
Asked why so many TAG “brand safety certified” companies were serving brand ads on pages with adult porn, Zaneis said: “This is a cautionary tale for advertisers that want to appear on user-generated content, period, full stop. It’s a story as old as the internet. You know you have to be very careful about your UGC policy, and that’s why a lot of advertisers just will not appear on UGC, because it is so difficult to control every single piece of content, because it changes quickly. … You can’t even index some of these pages, so you can’t really search them. It gets very difficult.”
Some TAG-certified companies, including The Trade Desk and Google, had cut off imgbb.com and ibb.co well before the senators’ letters and the Adalytics report were released. While TAG has a Threat Exchange where certified companies can share information about problem sites, Zaneis said they didn’t in this case.
“Right now, primarily, the Threat Exchange deals with malware attacks that harm consumers,” he said. “That’s our focus at the moment … not a clearinghouse for bad content publishers.”
Advertising watchdog group Check My Ads filed complaints with TAG on Feb. 9, citing the apparent failure of brand safety-certified companies to follow TAG requirements.
TAG sent inquiry letters to each of those companies on Feb. 11, Zaneis said.
“We take every single complaint very seriously,” he said. But he said Check My Ads has been submitting complaints to TAG since the watchdog’s founding in 2021 and that none of the complaints, which he described as “frivolous,” had resulted in companies losing certification.