Opinion: How to bake humanity into health care marketing data in 2021
Data seemed to betray us in 2020. Here are the reasons behind common data missteps—and remedies every health care marketer can adopt.
In public health, policy makers focused on the overall pandemic mortality rate in the population to drive measures aimed at curbing COVID-19. Many believe they underprotected some vulnerable groups and overprotected others because they failed to consider the data at a segment level.
In politics, observers are still wondering how the 2020 election pollsters could have gotten the electorate’s intentions so wrong. How could that many Trump voters come out of the woodwork on Election Day?
In health care, artificial intelligence and machine learning now diagnose skin cancer, detect strokes on a CT scan and help manage COVID patients. Even this sophisticated computing innovation is subject to human biases in the data used to “train” the software.
We’d like to trust our marketing research data, but we’re in the same boat as the pollsters, the policy makers and the medical scientists. When measuring human attitudes, intentions or reactions, we must account for human vicissitudes in research design, for numerous subject biases (both conscious and unconscious) and for faulty data interpretation. Humanity, not mathematics, is to blame.
Complex responses: subject biases
Asking people questions is a trickier proposition than you might think. People often say one thing yet do another. Imagine the psychology at work when a patient is asked: Have you been compliant in taking your medication, eating and exercising as advised?
We know that 60 to 80 percent of people are less than forthcoming with doctors. Patients explain that they fear being judged or lectured, or want to avoid embarrassment. Others seek to be viewed as compliant and responsible. This is called “social desirability bias.” Closely related is “demand characteristic” bias, in which a respondent provides the answer they think is wanted. These biases occur even in anonymous surveys.
Human psychology also affects studied behaviors. Subjects often modify their behavior because they are aware of being observed. This is called the “Hawthorne effect.”
Marketers sometimes inadvertently introduce what’s called “experimenter bias.” Your own expectations or beliefs could taint the project’s structure or interpretation.
Other data pitfalls include “information bias”—also called “measurement bias”—which arises when study variables are inaccurately measured or classified. Then there’s “selection bias,” a data distortion caused when a sample selected does not accurately reflect the target population.
Your research’s accuracy will reflect the rigor applied to fighting bias, even behind the anonymity of the internet. Here’s how to bake humanity into your data:
Respect the science
While it is tempting to take shortcuts by designing your own research to save time or money, resist! Let the experts do their thing. They know where bias creeps in and will employ the best bias-busting techniques.
Diversify your data
There is no better insurance policy in the data game than diversification of sources and methodologies. It’s awesome when data sources corroborate one another. Tap data sources with differing methodologies or compare first party data collected directly from your audience with second- or third-party data purchased from an aggregator, as one rare disease client has done.
Dodge the setting trap
Social media research aggregates data from social media platforms, web forums, news and blogs. It delivers large data sets of real-life subject self-expression—great, right?
There are booby traps here, too. Recently, we’ve seen subjects post aggressive messages on Twitter but then engage with disease peers on Reddit with empathy. Which reveals the real person? Both. Don’t jettison any data or crown it king before considering the setting.
Throw it out
Inoculate your data against the Hawthorne effect by discarding the first wave results. At the outset subjects “remember” they are being studied. Over time, they forget.
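This remedy amounts to a simple data-filtering step. A minimal sketch, assuming survey responses tagged with a wave number (the field names and figures here are purely illustrative, not from any real study):

```python
# Sketch: discarding first-wave responses to blunt the Hawthorne effect.
# The records, field names and scores below are hypothetical examples.

responses = [
    {"wave": 1, "subject": "A", "adherence_score": 9},   # early, observation-aware
    {"wave": 1, "subject": "B", "adherence_score": 10},  # early, observation-aware
    {"wave": 2, "subject": "A", "adherence_score": 6},
    {"wave": 3, "subject": "B", "adherence_score": 5},
]

# Keep only waves after the first: by then subjects have had time to forget
# they are being observed, so answers skew less toward "ideal" behavior.
settled = [r for r in responses if r["wave"] > 1]

mean_adherence = sum(r["adherence_score"] for r in settled) / len(settled)
```

In this toy data, the first-wave average would overstate adherence; the settled waves tell a more candid story.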
Consider continual collection of “digital life” data
Some revolutionary studies employ data collected from digital communications and actions over an extended period. Subjects opt in. They generate data passively and unconsciously as they go about their daily digital lives; the data is nearly devoid of responder bias. This approach particularly suits studies of intent (e.g., physicians’ intent to adopt a new procedure or product) or attitudes (e.g., patients’ feelings about their therapy).
Monitor benchmarks and probe historical data
Understand what your data might look like at the outset by scouring similar research. If possible, get experts to share relevant benchmarks with you. Then establish your own.
Large data sets offer higher levels of confidence and blunt—but not remove—the effects of bias. If you can’t afford a large study, share costs with another party interested in the same insights, as a dermatology client did.
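The sample-size effect can be made concrete with a standard margin-of-error calculation. A minimal sketch, assuming a simple proportion estimate with a 95% normal-approximation confidence interval (the sample sizes are illustrative):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a ~95% confidence interval for a proportion p
    estimated from a sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# A worst-case proportion (p = 0.5) at two sample sizes:
small_study = margin_of_error(0.5, 100)     # roughly +/- 10 points
large_study = margin_of_error(0.5, 10_000)  # roughly +/- 1 point
```

Note what shrinks here: sampling error, not systematic bias. A huge sample of the wrong population is still the wrong population, which is why the text says large data sets blunt, but do not remove, the effects of bias.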
Prioritize your learning objectives
Research data can fall down because investigators take on too many objectives. Limit the hypotheses you test and the population(s) you engage.
Become a perpetual student
Nurture your ability to sniff out data that appears random, anomalistic or prone to systemic bias. View others’ work. Get the best studies relevant to your brand into your digital feeds.
Protect the integrity of your data with these measures and unlock more of its potential!