How Blank Display Ads Managed to Tot Up Some Impressive Numbers
This is the story of a blank display ad that notched twice the click-through rate of the average branding ad.
It all started over lunch with my friend Charlie.
Maybe, I thought, there's an ambient mistake-click-rate on the web, like cosmic background noise. I wondered if that rate was high enough to create misleading conclusions about ad effectiveness or mess up the algorithms that drive automated buying and selling.
The online-ad ecosystem is constantly adjusting itself to place messages where they will get clicks. This learning loop takes mere minutes in the automated model.
Clicks are counted as a surrogate for attention, and are still used as our most important currency (i.e., cost-per-click). They are also the principal signal in a control system that governs a giant machine.
Sure, every control system has a little noise in its signals. Sunspots cause garage doors to open, I suppose.
But in this case, the issue for advertisers would arise when clicks that mean nothing (noise) overwhelm the clicks that indicate, or result from, interest in the advertising message (signal). When the signal gets below some threshold—"you're breaking up!"—even a little noise can render it useless.
If indeed there are a lot of mistakes, those with low click rates are most exposed to the noise. And this is often the case for brands that, absent a strong call to action, have click rates on the order of 0.02% to 0.04%.
So what is the mistake rate?
To find out, we built and trafficked an ad. But not just any ad.
The skunk works included an astrophysicist at online-analytics firm Moat, an ad-platform wizard from buying and optimization company Accordant Media, and a measurement maven from the Advertising Research Foundation. We equipped every ad with Moat's tag, and correlated that with traditional server-provided measures. Each ad was wired to reliably measure everything that happened to it, anywhere it ran.
The brief was simple: Create an ad that offered no message. Blank.
Surely, clicks on blank ads would qualify as noise.
We also enabled the ad to ask anyone who clicked: Why did you click? "Mistake" or "Curious"?
We created six blank ads in three IAB standard sizes, and two colors, white and orange. We trafficked the ads via a demand-side platform (DSP) with a low bid. We started with run of exchange, and in another phase trafficked to "named publishers" that would accept unaudited copy.
The average click-through rate across half a million ads served was 0.08%, which would be good for a brand campaign, and so-so for a direct response campaign. We detected no click fraud in the data we counted. Half the clickers told us they were curious, the other half admitted to a mistaken click. To obtain further insights, we tracked hovers, interactions, "mouse downs," heat maps—everything. (Heat maps detect click fraud because bots tend to click on the same spot every time.)
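The same-spot heuristic mentioned above can be sketched in a few lines: humans scatter their clicks across an ad unit, while bots tend to hit the exact same pixel over and over. This is a minimal illustration, assuming click events arrive as (x, y) coordinates; the 50% threshold is an invented example, not the rule any particular vendor uses.

```python
from collections import Counter

def flag_repeated_spot(clicks, threshold=0.5):
    """Flag a batch of clicks as suspicious if too many land on one pixel.

    clicks: list of (x, y) coordinates within the ad unit.
    threshold: fraction of clicks on a single spot that triggers the flag.
    Bots often click the exact same coordinate; humans scatter.
    """
    if not clicks:
        return False
    counts = Counter(clicks)
    spot, n = counts.most_common(1)[0]
    return n / len(clicks) >= threshold

# Human-like scatter: no single spot dominates.
human = [(12, 40), (180, 22), (95, 61), (44, 10), (150, 55)]
# Bot-like pattern: the same pixel, again and again.
bot = [(100, 50)] * 8 + [(101, 50), (12, 40)]

print(flag_repeated_spot(human))  # False
print(flag_repeated_spot(bot))    # True
```

In practice a heat map is just this idea rendered visually: a hot pixel that absorbs most of a campaign's clicks is a red flag.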
Our data suggest that about four clicks in every 10,000 impressions are unintentional, and there was some variance by site.
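The arithmetic behind the noise worry is worth making explicit. If roughly 4 clicks per 10,000 impressions are unintentional (0.04%), and a brand campaign measures a CTR of 0.02% to 0.04%, mistake clicks alone could account for most or all of what the campaign counts. A back-of-envelope sketch, using the figures above:

```python
# ~0.04%: unintentional clicks per impression, per the blank-ad test.
MISTAKE_RATE = 4 / 10_000

def noise_share(measured_ctr, mistake_rate=MISTAKE_RATE):
    """Fraction of a measured click-through rate attributable to mistake clicks."""
    return min(mistake_rate / measured_ctr, 1.0)

for ctr in (0.0002, 0.0004, 0.0008):  # 0.02%, 0.04%, 0.08%
    print(f"measured CTR {ctr:.2%}: up to {noise_share(ctr):.0%} could be noise")
```

At a 0.02% measured CTR, the ambient mistake rate is twice the signal; even at 0.08%, half of what you count could be noise.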
This does raise a question. What is a click? Is it just an indication of a person solving a little mystery along the route of his quest? Is it an experiment? Is it a nervous tic? Or all of the above?
Considering that clicks are the core of our digital nervous system, and the key to the online economic system, we know little.
At a minimum, the data suggest that if you think a click-through rate of 0.04% is an indication of anything in particular, you might be stone-cold wrong.
Is this research flawed? Yes, because we trafficked a blank, not an ad. Still, it's indicative that below some threshold, there is a lot of noise to confound our delicate signal.
And now it's over. The team will celebrate. The dinner bill might exceed the cost of the test, which was $480. That's a pretty good deal for a diagnostic check-up on a $100 billion machine, don't you think?
Ted McConnell is exec VP-digital for the Advertising Research Foundation.