Many of us have been here: We undertake a marketing program with promises of tracking "everything" -- clicks, click-through rates and conversions. CEOs want clear answers, so, buoyed by those heady promises, everyone proceeds with high expectations.
Yet when it comes time to actually report results, letdowns are frequent because: 1) something went wrong with the tracking technology, 2) the client or vendor infrastructure went down or flaked out, or 3) the test was so ill-constructed that it couldn't tell you what you set out to learn.
So for fun, I decided to poll some senior marketing and agency colleagues of mine. I asked them, "How often are you able to accurately measure the effectiveness of digital marketing campaigns?" The results were sobering (OK, OK -- this is an unverified, non-projectable sample of 32):
- 53% could not measure digital results at all due to some technical "glitch" either on the vendor or the client side
- 35% could measure only some of what was expected to be measured
- 12% could measure much/all of what they wanted
Now before all you testing agencies deluge me with sales pitches, the main takeaway is that clients' increasing expectations of online measurability have created a new myth of online marketing as the "perfect" measurability machine. And this myth, while it may sound wonderful, often does more harm than good.
We all know that for decades marketers have needed to measure advertising to understand what works. In the '80s, packaged-goods marketers conducted sophisticated test plans with matched test markets to determine how effective an ad campaign was. While the stakes were high, interestingly, the metric was pretty basic -- market sales.
As marketing technology evolved, so it seemed did marketers' expectations of measurability. Direct marketing began to be evaluated on an ROI basis, a metric that, interestingly enough, brand advertising was rarely subject to. Then online advertising programs drove a new level of expectations. These campaigns could track number of clicks, where visitors clicked, where they didn't, when they clicked -- just about anything, or so the myth went.
But in the real world, getting online technology testing right can be tough and fussy. Reference codes don't get placed on the site properly. The re-marketing program runs on a proprietary system that doesn't allow for in-house verification. There is a marked disconnect between an "optimized" pay-per-click landing page and actual business results. Or the variable testing structure can't possibly get at the answer you really wanted.
This is the imprecise state of technology testing today -- a far cry from the mythic land of perfect measurability so often conjured in pre-test meetings. And if the test metrics prove to be unreliable, try explaining that to the CEO (not a pretty sight).
Maybe it's time we all start dialing back the measurability rhetoric because the promise rarely lives up to the expectations anyway. Stuff can and does go wrong -- technology stuff, plan-test stuff, real-world stuff. Then, on top of all the things that can go wrong, there is still some amount of guesswork required to interpret results because you need to explain why certain behaviors did or did not work. No raw metric can answer that.
Here at Paltalk, we keep technology testing metrics as simple as we can. We let the numbers provide guidance, but we never allow ourselves to become slaves to the results. And we allow for the unexpectedness of real human behavior.
Using marketing testing this way requires a balancing act between CEOs' desire to know everything and real-world practicalities about what is accurately measurable. But if you can pull it off, it's worth it, because it's how everybody ends up satisfied.
~ ~ ~
Judy Shapiro is senior VP at Paltalk and has held senior marketing positions at Comodo, Computer Associates, Lucent Technologies, AT&T and Bell Labs. Her blog, Trench Wars, provides insights on how to create business value on the internet.