Anyone in or around the ad-tech industry will tell you the same thing: Startups often promise agencies and marketers the world to win business. The reality frequently falls short, which makes the vendor-evaluation process crucial.
As the industry matures, the processes by which marketers and agencies select ad-tech vendors -- whether a demand-side platform, a retargeting firm, or an ad network -- have evolved, say those on both sides of the process.
But there are still common flaws that can lead to poor choices. Take, for example, the classic approach of pitting two or more vendors against each other in the ad-tech industry's version of a bakeoff. Often, the agency or marketer will allocate each of the tech firms something like $10,000, $25,000 or $50,000 in media spend, along with some direction on the goals they are trying to achieve with a campaign.
With proper test design and controls in place, the marketer or agency can ensure that the opposing firms aren't, for example, bidding against each other for the same ad inventory.
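One common control of this kind is a deterministic user-level split: each user is hashed into exactly one vendor's test cell, so the competing firms never bid on impressions for the same person. The sketch below (the vendor names and the use of a SHA-256 hash are illustrative assumptions, not details from the article) shows the idea:

```python
import hashlib

# Hypothetical vendor names for the bakeoff's test cells.
VENDORS = ["vendor_a", "vendor_b"]

def assign_vendor(user_id: str) -> str:
    """Deterministically assign a user to one vendor's cell so the
    competing firms never bid against each other for that user's
    impressions."""
    h = int(hashlib.sha256(user_id.encode("utf-8")).hexdigest(), 16)
    return VENDORS[h % len(VENDORS)]

# The split is stable: the same user always lands in the same cell,
# and the cells are disjoint by construction.
print(assign_vendor("user-123"))
```

Because the assignment is a pure function of the user ID, the split stays consistent across the whole test without any shared state between the vendors.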
"In general, companies that invest in a robust selection process get some pretty big dividends from that," said Philip Smolin, senior VP-marketing solutions at Turn.
Yet these tests often contain loopholes that allow firms to game them, according to industry executives. One example is when vendors fulfill the test campaign to give an advertiser the results it wants, but lose money in the process. The thinking for vendors is that it is worth it to lose a bit of money in order to win long-term business. The problem here is obvious: Once they win the business, they will need to make a profit and will have to change tactics to do so.
This usually occurs in one of two ways, according to Marc Grabowski, chief operating officer at Facebook-ad firm Nanigans. The vendor will either reduce the number of "acquisitions" it wins for the marketer or it will target customers who are cheaper to reach but who do not have a high lifetime value to the marketer.
Further, Mr. Grabowski said it's not uncommon to see firms arbitrarily allocate a round number -- say, $10,000 or $25,000 -- to testing without computing whether that will be enough money to create a statistically significant test. This is especially true in situations where advertisers want to put ads in front of dozens, if not 100, of different combinations of customer segments.
"If you spend $100 on a segment and receive two acquisitions, will this number of acquisitions increase linearly as spend increases?" he asked rhetorically in an email. "Is the sample size truly representative of the entire inventory and targeting set?"
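A back-of-the-envelope calculation makes his point concrete. The sketch below (all figures -- the conversion rates, $2 CPM, and 100 segments -- are hypothetical, not from the article) uses the standard normal-approximation sample-size formula for comparing two conversion rates. Even a modest per-segment budget multiplies quickly once the test spans many segments:

```python
import math

def sample_size_per_arm(p1: float, p2: float) -> int:
    """Impressions each vendor needs per segment to detect a lift
    from conversion rate p1 to p2 (two-sided z-test at alpha=0.05
    with 80% power, normal approximation)."""
    z_alpha = 1.96  # critical value for alpha = 0.05 (two-sided)
    z_beta = 0.84   # critical value for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)

# Hypothetical: 0.10% baseline conversion rate vs. a 0.12% challenger.
n = sample_size_per_arm(0.0010, 0.0012)

cpm = 2.00        # hypothetical cost per thousand impressions
segments = 100    # hypothetical number of segment combinations
budget_per_segment = n / 1000 * cpm
total_budget = budget_per_segment * segments

print(n, round(budget_per_segment, 2), round(total_budget, 2))
```

Under these assumptions each vendor needs roughly 430,000 impressions per segment, or about $860 of spend -- manageable for one segment, but across 100 segment combinations the properly powered test costs on the order of $86,000, far beyond a $10,000 allocation.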
For agencies, there can be the temptation to simplify the evaluation process by choosing only one partner in each segment of the industry. On the surface, that can make sense. Yet that philosophy doesn't always work, according to Josh Jacobs, president of Omnicom trading desk Accuen. One of his first meetings there was with a group of employees trying to pick "the best" demand-side platform. But in analyzing results, the team realized that not all partners offered the same thing or reached the same base of web users.
"What we learned pretty quickly was that there wasn't a best DSP overall and that we had different reasons on different campaigns to work with different ones," he said.