Mini, Nike, Volkswagen, Budweiser and Target - what does this selection of the nation's current favorite commercially successful and creatively lauded campaigns have in common? None was copytested in advance. That fact only adds fuel to the arguments of copytesting's many critics, who damn the system as increasingly outmoded - one in which irrelevant questions are asked by the wrong people. "You have to take a stance on your brand," Martin says. "Often, copytesting becomes so diluted that it purports to mean everything to all people, but means nothing to anyone. Consumers are very trained to respond to what they know."
The issue of whether copytesting is a cancer on creativity is hardly new, of course. Was there ever a creative who did not recoil at the words "let's test"? But copytesting is currently in the spotlight again, as clients and agencies alike seek ways to force more creative, attention-grabbing ideas through the ever more fragmented media landscape. The issue was brought into focus by the recent high-profile trip of a group of senior Procter & Gamble marketers to the Cannes International Advertising Festival. The client and its agencies appear to agree that if the trip is to have any meaningful future impact, P&G has to reassess its rigid copytesting system.
With less fanfare, other marketers are conducting similar exercises. At great expense, Unilever last year put over 60 commercials through the three different copytesting arrangements previously in place at the Helene Curtis, Chesebrough-Pond's and Lever Bros. units that it had merged to form Unilever Home and Personal Care. Companies like Ameritest and newcomer Hall & Partners are now using a less purely numerical approach to challenge the dominance of the traditional market leaders: The ARS Group (the U.S.' largest); the WPP-owned international powerhouse Millward Brown; Ipsos-ASI (where most of P&G's work is tested); Diagnostic Research; Roper ASW; Mapes & Ross; and MSW. For many, change couldn't come soon enough. "Over the past 20 years, the one thing in the business that really hasn't changed is how we research," argues Mark Tutssel, deputy chief creative officer at Leo Burnett USA and an outspoken critic of copytesting. "Ideas need room to breathe, and testing is a form of suffocation. It pushes work to the lowest common denominator. The crux of the problem is that copytesting is a formula. We don't extract meaning; clients take it verbatim. ASI testing has become the decisionmaker. I would love to find creative researchers; researchers who have the ability to interpret. All they do is regurgitate. It's worked for 20 years, but we can't keep the system. We have no divine right to people's attention." He cites Burnett research finding that of the 2,904 commercial messages consumers receive a day, they pay attention to just 52 - and that consumers believe brands are becoming more similar in some 46 categories.
This is where the real debate over the value of the tests resides. Most traditional copytests are predicated on the notion that there is some clear, inherent benefit in a product that needs communicating to the consumer. Consumers are asked questions about the ad's success in persuading them of that benefit - yet today, in the age of product parity, many ads, arguably most, don't even try. American copytesting (practice varies around the world) has its roots in this persuasion-shift model, whatever the technique - clutter test, recall test, one-on-one interview - used to get there. Tests vary considerably, but nearly all include a question along the lines of, "Can you remember it?" and another asking, "How likely are you to buy the product as a result?"
Whether it be Millward Brown's Link system or Ameritest's "flow of attention" and "flow of emotion" scoring, the likes of Tutssel and other leading creative directors are hugely frustrated by the entire process. "Focus groups and copytesting have infected our industry and we need to get together to get them out," says Alex Bogusky, executive creative director at Mini's agency, Crispin Porter + Bogusky, arguably America's hottest shop right now. "We all know it and yet we still use it as a way to make our jobs easier. We're too lazy to have a vigorous intellectual argument to decide how to move forward with our advertising ideas. So instead we say, 'Let's test it. Then we'll know.' It's gotten so it's almost irresponsible. Some people are ashamed that our industry could be based on something as difficult to define as art or craft. We should be celebrating this or at least not hiding from it. The fact is, the consumer is never going to help you create something they haven't seen or heard before."
"Invariably, I walk out and think, 'This is not how human beings function and feel things,' " says Logan Wilmont, joint executive creative director at Kirshenbaum & Bond, one of Target's agencies. "It pulls you into an absolute frame of mind of the nuance of one word vs. another. In the end, clients cannot see the wood for the trees. They're so bogged down in the minutiae, they can't track an argument. It's a conspiracy of mediocrity. The whole thing is designed to remove interpretation." If he had his way, "I wouldn't copytest at all," says Wilmont. "If you use planning properly and pretest, you won't change Lord Leverhulme's odds ['Half of my advertising budget is wasted; the trouble is I do not know which half']. There's no accountability back to the copytesting. The researchers have created a bizarre situation where they're extensions of the clients' marketing departments. I find it shocking. It's made clients less intuitive. Apart from anything else, they don't experience what marketing is all about. They deny themselves the opportunity to reach out for greatness."
To some, there is an even more fundamental flaw in the entire process. Emma Cookson is the planning director at Bartle Bogle Hegarty in New York. As such, she concedes, she is not as vehemently opposed to copytesting as a creative might be, because her next pay raise or career prospects do not hinge in the same way on the outcome of executional research. However, she does have wider-ranging issues with the research. "It's true that agencies can be too precious," she says. "It's true that clients are too risk-averse, looking to minimize problems rather than maximize opportunities, but you can't say that all pretesting is bollocks. You'll find out if an ad is pretty good or pretty bad. However, the assumption of a lot of research is that man is a solitary creature. We show consumers the ad and ask them what they think of it. This doesn't take into account the effect of an ad being out there in the public domain. Nowadays, the biggest campaigns get into the culture and take on a life of their own. This is something it's just not possible to pick up on in testing. You get what you test for. The Gap or Joe Boxer underwear? How do you copytest for those? What are the benefits or persuasions?"
Her concerns are shared elsewhere. Not content with the post-merger tests mentioned above, Ameritest CEO Chuck Young and client Unilever's John Kastenholz, VP-consumer and market insight, home and personal care North America, have conducted a second study of copytesting with damning findings, particularly about recall testing. Not least is the claim that there is a strong correlation between a good recall score and a boring ad. Unilever has spent some $5 million researching research, and Kastenholz is clearly on a mission - obvious from the fact that the U.S. is the only market where Unilever uses Ameritest in addition to its global arrangement with WPP's Millward Brown. "The understanding is more important than the responses," explains Kastenholz. "Too often, marketers just want the key numbers: is it good or bad? Agencies find this frustrating. To me, the agency executive should be the client for the research. The biggest challenge of all is how to have a discussion about research results. A market researcher not expert in advertising just turns creatives off. Sometimes they refuse to attend. We've tried to make this more about the diagnostics."
Kastenholz tells of holding meetings at agencies, refusing to have them without creatives present, shortening the timescale of testing and taking other measures to make the whole process less adversarial and tense. Arguing that he has never had a problem with consumers imagining an ad's high production values, he also notes that for ads that are heavily character-dependent, he doesn't test "until the ads are in the can." But how does he refute Wilmont's charge: "Clients accept the answers rather than seek an interpretation. Clients want things to be quantified. It is impossible; it's a guarantee of mediocrity"?
"I have plenty of examples of how copytesting itself does not stifle creativity," Kastenholz responds. "It's the lack of an open-minded client that is the issue. For us, it's $20 million to $60 million a go on a campaign. We have to justify the numbers upward. Every cent has to work as hard as it can. But it is we ourselves who are the guilty parties. Sometimes we have no choice but to put out average ads." In an echo of Tutssel's lament, Kastenholz fingers the quality of most researchers as the second major bugbear after the outmoded nature of the questions. In his ideal world, more advertising planners would spend stints as researchers. "In the early '90s, I realized that marketers were just using test scores as a stick to beat often-furious creatives over the head," he says. "In general, most researchers are professional researchers, not ad experts. They talk about results as if discussing inanimate objects, or regurgitate garbage rules like, 'Show the pack shot in the first five seconds, and then at least three more times in the remaining 25.' Telling creatives how to 'fix' their ad like that is not going to work."
So, the obvious question remains: if advertising and the media have both changed so much, why hasn't copytesting (Hall and Ameritest notwithstanding) - and will it now change? Especially when the likes of Young and Kastenholz have spent so much money to prove, for example, that recall testing only rewards one kind of heavy-sell, rational advertising? "Emotion has never been irrelevant to selling, but 20 years ago there was more product-benefit differentiation," Young says. "Coca-Cola was not built on a lot of facts. We have just tended to forget that. Advertising never pays for itself in the short term. Promotions always do. We think this is a category that has sat on its ass for too long and we're trying to stir the pot. Too many people have stuck their heads in the sand for too long and ignored the issue. The best way to get rid of a paradigm is to have the old generation die off. Everyone grew up with this system."
There is no suggestion that copytesting will do anything but become more prevalent. Most companies are not like Mini. Marketers within corporations will only come under yet more pressure to justify the numbers upward. "Executional research is not an ad industry phenomenon," points out BBH's Cookson. "Look at Hollywood, look at TV. They're literally testing the endings of movies and shows. Wherever you have an industry where creative ideas are being used to affect commercial decisions, you're going to have business seek more knowledge."
And so, it would seem, until the P&G-trained generation passes, and companies like Hall and Ameritest spread a more emotion-based system with less stress on the persuasion of product benefits, antipathy will remain the status quo. We will still see researchers using their numbers as a stick with which to beat agency creatives, and we will still see bitter and frustrated creatives hostile to the very notion of testing and its adherents. Worst of all, the most imaginative creative ideas will continue to be chipped away, pared down, and sanitized by the tyranny of "the norm."