
In Defense of Experimental Advertising

Why It's the Only Way Marketers Can Determine Which Half of Their Campaign Is Wasted

By Duncan Watts

How is it that in spite of the incredible scientific and technological boom since John Wanamaker's time -- penicillin, DNA, space flight, supercomputers, the internet -- his puzzlement as to which half of money spent on advertising is wasted remains as relevant today as it was nearly a century ago?

It's certainly not because advertisers haven't gotten better at measuring things. Between their own electronic sales databases, third-party ratings agencies such as Nielsen and ComScore, and the recent tidal wave of online data, advertisers can measure many more variables with much greater precision than Wanamaker could. No, the real source of the problem is that what advertisers want to know is whether their advertising is causing increased sales -- but, for the most part, all they can measure is the correlation between the two.

Everyone, of course, "knows" that correlation is not causation, but it is surprisingly easy to get the two confused in practice. Let's say a new-product launch is accompanied by an advertising campaign, and the product sells like hot cakes. It's tempting to conclude that the campaign was a success. But what if it was simply a great product that would have sold just as well with no advertising at all, or if a different campaign would have generated twice as many sales for the same cost? Well, then clearly some of that money was wasted. Or let's say an advertiser pays a premium to reach consumers it thinks are likely to be interested in its product. Again, this seems reasonable -- no one would market diapers to teenage boys -- but what if some, possibly many, of the interested consumers would have bought the product anyway? In that case, once again, at least some of the advertising was wasted.

Distinguishing causality from mere correlation, in other words, requires measuring not only what happens in the presence of advertising but also what happens in its absence. Put another way, "advertising effectiveness" is meaningful only when measured relative to some "control" state, which could be nothing at all, or could be some alternative "treatment," such as a different campaign.

Versions of this measurement problem arise in science as well as in advertising, but in science there is a standard solution: Run an experiment. In medicine, for example, a drug is supposed to be approved only after it has been subjected to experimental studies in which one group of people (the treatment group) receives the drug, and a different group (the control) receives either nothing or a placebo. Only if the treatment group yields consistently better results than the control group can the drug company claim that the drug "works" -- no matter how many people taking the drug appear to benefit from it.
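
To make that criterion concrete, here is a minimal sketch in Python of the comparison the drug example describes, using entirely made-up numbers rather than data from any real trial: the treatment counts as "working" only if its success rate beats the control's by more than chance variation would explain.

import math

def two_proportion_z(successes_t, n_t, successes_c, n_c):
    # Pooled two-proportion z-statistic: how many standard errors
    # separate the treatment group's success rate from the control's?
    p_t, p_c = successes_t / n_t, successes_c / n_c
    p_pool = (successes_t + successes_c) / (n_t + n_c)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    return (p_t - p_c) / se

# Hypothetical outcomes: 620 of 1,000 patients improve on the drug,
# while 550 of 1,000 improve on the placebo.
z = two_proportion_z(620, 1000, 550, 1000)
print(round(z, 2))  # about 3.2, well beyond ~1.96, so unlikely to be luck

Without the placebo group there would be nothing to compare the 62% against, and no way to say whether the drug, or simply time, deserved the credit.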

The idea of measuring advertising effectiveness with controlled experiments enjoyed a burst of enthusiasm in the 1970s and 1980s, and some direct-mail advertisers still run them; however, the practice of routinely building control groups into advertising campaigns, for TV, word-of-mouth, and even brand advertising, has largely been set aside in favor of observational data, statistical models or even just gut instinct.

ABOUT THE AUTHOR
Duncan J. Watts is a principal research scientist at Yahoo Research, where he directs the Human Social Dynamics Group.

No doubt at least some of this reluctance can be attributed to the complexity and expense of running experiments "in the field," but that situation has changed dramatically with the advent of online advertising. Recently, my colleagues at Yahoo, David Reiley, Taylor Schreiner and Randall Lewis, demonstrated the potential for online experimental advertising with an extraordinary field experiment involving more than 1.5 million Yahoo users who were also customers of a large retailer. They randomly assigned 1.2 million individuals to a treatment group that was subsequently shown display ads for the retailer, while the remaining 300,000 (the control group) were shown nothing. Because all the subjects were also in the retailer's database, the researchers could observe both their online and their in-store purchases, and because the assignment was random, any differences in purchasing behavior could be attributed to the advertising itself. By following an experimental approach, therefore, the researchers were able to estimate that the additional revenue generated by the advertising was at least four times the cost of the campaign in the short run, and possibly much higher over the long run.
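
The arithmetic behind that kind of estimate is straightforward once a control group exists. The Python sketch below is a rough illustration, not the researchers' actual analysis: the group sizes come from the experiment, but the per-user spend and campaign-cost figures are invented for the example. It shows how random assignment turns a difference in average purchases into an estimate of incremental revenue and return on ad spend.

# Group sizes from the experiment: 1.2 million treated users,
# 300,000 held out as a control who saw no ads.
n_treatment = 1_200_000

# Hypothetical average purchases per user during the campaign window.
avg_spend_treatment = 10.45   # dollars per treated user (assumed)
avg_spend_control = 10.05     # dollars per control user (assumed)

# Random assignment means this per-user gap is attributable to the ads,
# not to pre-existing differences between the two groups.
lift_per_user = avg_spend_treatment - avg_spend_control
incremental_revenue = lift_per_user * n_treatment

campaign_cost = 100_000       # assumed media cost of the display campaign
print(f"incremental revenue: ${incremental_revenue:,.0f}")
print(f"return on ad spend:  {incremental_revenue / campaign_cost:.1f}x")

Without the 300,000 held-out users, the same data could only say that treated customers spent $10.45 on average, with no way to tell how much of that they would have spent anyway.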

Reiley et al. concluded that advertising works -- a welcome result for both the advertiser and the publisher -- but it's important to note that even if they had found that the ads were not effective, the experiment itself would still have been worthwhile: Just as in science, finding that something doesn't work should not be regarded as a failure, but rather a step toward discovering what does. Nor is this lesson restricted to online advertising. True, the web has some advantages over traditional media in terms of measurability and cost, but the basic principle is the same everywhere: Advertisers can only learn which "half" of their advertising is wasted -- and thereby waste less of it -- by integrating an experimental element into everything they do. Otherwise, a century from now they will still be reciting Wanamaker's curse.
