If you're ever "lucky" enough to be part of a clinical trial for a proposed medication or treatment, you'll be told that you'll have a 50% chance of being administered the actual drug being tested, and a 50% chance that you'll be receiving something that looks like the medication without the active chemical in it. You'll simply take that pill you're assigned, as a part of your routine each day, and make regular visits to a doctor to determine whether the condition you're trying to affect is improving.
Clinical trials using control groups -- half the patients get the real medicine; half get a placebo -- are usually set up as "double-blind" tests. Not only do you, the patient, have no idea whether you're receiving the actual treatment being tested; neither does the doctor performing your regular evaluations. These precautions are taken because, sometimes, the mere idea of receiving treatment can create real -- or at least perceived -- improvement. And the doctor evaluating you isn't immune to experimenter bias either: they might "see" signs of improvement that aren't actually there if they believe you're receiving some sort of treatment.
What's baffling to me is that while we're all aware we can be fooled by this placebo effect, we seldom apply our understanding of it to market research -- especially as it pertains to creative or concept testing.
Think about it. We don't trust people to honestly assess whether they feel better after taking a medication, yet we're more than willing to put an ad in front of someone and ask them to judge whether it will motivate a purchase. Seems pretty nonsensical, doesn't it?
Unfortunately, our humanity tricks us into perceiving that we know exactly what motivates us to think the things we think, do the things we do, and buy the things we buy. We mistakenly believe ourselves to be very rational beings, always weighing decisions based on input and facts we're actively aware of.
But, we're not, and we don't.
It's not easy for us to admit this, or maybe even to comprehend it. But research in neuroscience and behavioral psychology has repeatedly shown that we're often moved to action by factors we're not consciously aware of. Examples abound: the temperature of a drink in our hands can influence how we assess the person we're interviewing; the legibility of a typeface can influence how long we think a recipe will take; simply writing down a random number can influence how much we'll bid on an item we want to buy.
Our mistaken belief that we explicitly know why we do what we do fools us into thinking that consumers can objectively assess how an ad affects them.
I'm here to stand on my soapbox and say: they can't.
This means that you shouldn't test messaging by putting multiple options of an ad -- or a headline, or a name, or a logo, or even packaging -- in front of a consumer to ask which will likely motivate them to buy. In fact, they should never see more than one option that you're testing.
Rather, a monadic test, where a respondent only sees one stimulus, is the most effective means to garner legitimate feedback. And, ideally, the methodology will also involve disguising the stimulus so that the respondent isn't aware of what is being evaluated. In behavioral psychology, in order to most accurately assess what is motivating a behavior, experiments are constructed so that the respondent has no idea what stimulus or behavior is being tested.
The only real downside of monadic testing is the cost, because it requires more total respondents: you need a separate group (or "cell") of respondents for each stimulus option being evaluated, so that the results can then be compared across groups. If your boss is telling you, "Look, our budgets are already too small. We just need to do something less expensive," then I would recommend not wasting money on testing at all.
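To make that "compared across groups" step concrete, here is a minimal, hypothetical sketch of how two monadic cells might be analyzed. The concept names, ratings, and sample sizes are invented purely for illustration, and Welch's t-test is just one reasonable way to compare the cells:

    # Hypothetical sketch: each respondent group saw exactly one ad concept
    # and rated purchase intent on a 1-to-10 scale. All data are invented
    # for illustration; a real study would need far larger cells.
    from scipy.stats import ttest_ind

    cell_a = [7, 6, 8, 5, 7, 9, 6, 7, 8, 6]  # respondents who saw only Concept A
    cell_b = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]  # respondents who saw only Concept B

    # Compare the two independent cells with Welch's t-test,
    # which doesn't assume the groups have equal variance.
    t_stat, p_value = ttest_ind(cell_a, cell_b, equal_var=False)

    print(f"Concept A mean intent: {sum(cell_a) / len(cell_a):.2f}")
    print(f"Concept B mean intent: {sum(cell_b) / len(cell_b):.2f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

Because each respondent rated only the concept they saw, any difference between the cells reflects the concepts themselves rather than a side-by-side judgment the respondent was asked to rationalize.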
While a low-budget shortcut might assuage others in your company, you should help them understand the pitfalls. Help them see that it's actually more reckless than not doing a test at all, because flawed methodologies create a level of false security. Help them understand that the "voice of the consumer" can't always be trusted.