Why you should always test data before you buy it
It’s not an educated choice when you’ve simply picked the most-convenient option
Marketers and sales teams are flooded with data options, from stand-alone pure-play providers to packaged deals that group together data, media, services and technology. In 2018, companies spent $19.2 billion on third-party audience data for advertising and marketing efforts, according to IAB. The various options available offer different degrees of value and efficiency to marketing organizations, leaving them to make a choice about what fits their needs.
Yet for all of the value derived from using data, and for all of the choices available, marketing teams rarely test this asset before making a purchase commitment. Many organizations may have tricked themselves into thinking they’ve made an educated choice, when all they’ve done is pick the most-convenient option.
Whether it’s pure-play data only or a packaged data-and-services or platform product, marketers and sales teams need to isolate and test the data that will eventually power their efforts. A failure to accurately evaluate means that marketers don’t know what they have, and don’t know what they might be missing.
Buying data and services can resemble buying a stereo system. It’s possible to walk into a big-box store and purchase a single system that has all the necessary components to enjoy music in your home. Or, you could visit a specialty store and listen to dozens of different speakers, receivers and amplifiers. People who truly care about sound are going to isolate the different components and think about what each piece brings to the listening experience. Marketing in the digital age is no different.
The biggest reason to isolate and test data on its own is that testing allows marketing and sales teams to make conscious decisions about the tools they use, and how strong or weak the actual data signal is, before even contemplating how that data will integrate into their workflows. Fortunately, testing the data requires only a few easy steps.
Test with the tactics you’re already using
One reason so many marketers avoid testing is that they see it as a costly and time-consuming experiment. But if done within campaigns a marketer is already running, testing shouldn’t cost additional money or time. If you’re already advertising on LinkedIn, use LinkedIn for the test. If you already have a sales development team making outbound calls and sending outbound email, then use that motion as a testing ground. There’s no reason to run new or separate campaigns and no reason to allocate a new dedicated testing budget. Instead, allocate existing media budget toward testing.
This approach kills two birds with one stone: It helps marketers understand the quality of signal and, because the test is performed within the campaigns the marketer is already running, puts an operational process in place. If the data adds value, then the marketer is already set up to activate the data, with little additional work.
Don’t focus on accounts in the pipeline
To get the clearest results, b-to-b marketers need to avoid using the data on accounts already in the pipeline. If a target account is near closing, it becomes incredibly difficult to isolate which factor finally put the account over the edge. Was it the data that was used to deliver some final messaging, or something else? The lack of clarity contaminates the results.
Instead, marketers should focus on new accounts and targets, performing clear A/B analyses after the campaign experiment. This makes it easy to see what kind of impact the data delivered in the test, compared to either another provider in a bake-off, or a control campaign performed without any kind of supplemental data.
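For readers who want to put numbers behind that A/B comparison, the standard tool is a two-proportion significance test on conversion rates. The sketch below is illustrative only: the function name and the campaign figures are hypothetical, and it uses only the Python standard library.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates between two campaign arms.

    conv_a / n_a: conversions and audience size for the test arm
    (campaign run with the vendor's data); conv_b / n_b: the same
    for the control arm (no supplemental data, or a rival provider).
    Returns (lift, z, p_value), where p_value is two-sided.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p_a - p_b, z, p_value

# Hypothetical numbers: 120 conversions from 4,000 prospects targeted
# with the vendor's data vs. 80 conversions from a 4,000-prospect control.
lift, z, p = two_proportion_z_test(120, 4000, 80, 4000)
```

With these made-up figures the one-percentage-point lift is statistically significant; with smaller audiences the same lift often is not, which is exactly why the test needs enough volume to produce representative results.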
Control the experiment so no source has an advantage
Another important factor for getting clean results is distributing the data signal evenly, in order to control variables. Ideally, testing should be done across both the best and worst channels and marketing activities. If that’s too ambitious, and the data can be tested only across sales, organizations must make sure the data is shared with several salespeople—and not just the best or the worst performers. If the test can be done only in email, it should be done across a few different campaigns.
The goal is to perform enough tests within a program to receive representative results. Returning to our first point, this doesn’t require a new budget investment—it simply requires dedicated budget within the activities already underway.
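One simple way to keep any arm from getting a biased share of accounts is randomized round-robin assignment. The sketch below is a minimal illustration, assuming a flat list of new target accounts and arbitrary arm names; none of these identifiers come from a real platform.

```python
import random

def assign_arms(accounts, arms, seed=42):
    """Randomly distribute new target accounts across test arms
    (providers, channels, or sales reps) in near-equal shares,
    so no arm starts with a systematically better list.
    A fixed seed keeps the split reproducible for later review."""
    rng = random.Random(seed)
    shuffled = accounts[:]          # don't mutate the caller's list
    rng.shuffle(shuffled)
    assignment = {arm: [] for arm in arms}
    for i, account in enumerate(shuffled):
        assignment[arms[i % len(arms)]].append(account)
    return assignment

# Hypothetical 100-account target list split across two providers
# and a no-data control group.
groups = assign_arms([f"account-{n}" for n in range(100)],
                     ["provider_a", "provider_b", "control"])
```

Shuffling before the round-robin is the important step: it breaks any ordering in the source list (by size, region, or lead score) that would otherwise tilt the comparison.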
On the same note, tests should also be done independently of the sales pitch, or the relationship between the marketer and provider. These factors almost certainly play a role in the final purchase decisions but, whenever possible, they must be quarantined in order for the test results to stand on their own.
When the test is complete, there’s also no rule that says marketers must pick the best-performing data. If there’s minimal performance difference between stand-alone data and data that comes packaged with a platform and services, then it might make sense to go with the packaged data, especially if the organization finds it easier to use right away. What matters is that marketers make informed decisions about what they are buying. Whether it’s a bake-off between two providers, or a simple evaluation of the data within a platform, marketers must test. There’s no reason not to.