Advertising moves to made-up metrics


It's getting complicated out there. Growing inconsistencies in government statistics have led the watchful economist to swallow hard and caution, "We must learn to live with a less measurable world."

Advertising's own little data patch is growing bald. Probability sampling -- the technique that separates research from augury -- no longer exists. Nielsen TV response rates are now below 40%, which cannot, on the face of it, be probability sampling.

Science is giving way to convenience in the way we gather all of our numbers. Who but an opportunist would try to measure 200 magazines in a single survey? Who but a cynic would still use a household diary? And the pressure to count and quantify increases each day.

The scientist in us wants a rational approach to decision-making based on substantial data. But the guy who does the work in us understands that much of the data we need will never be available, and much of the data we have are not substantial. The discomfort is not in making do with deficient data. It's with our growing need to make the numbers up.


In advertising, made-up metrics are the backside of accountability. Management pressure to reduce "judgment calls" to get more rational spending decisions creates systems that require numbers that just aren't there. These are called "expert systems."

In its simple form, an expert system is a decision model in which not all the data needed to calculate the answer are available -- so some have to be supplied by committee ("the experts").

In media expert systems -- for example, a media-mix optimizer -- costs and audiences are available from invoices and surveys, but the relative values of different media executions in producing the desired response (a magazine color-page ad exposure compared with a 30-second TV spot exposure) are not. These need to be supplied by the user in the form of a value -- like TV equals 100; magazines equal 82 (which means, in this case, a magazine exposure is 82% as effective as a TV exposure).
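To make the arithmetic concrete, here is a minimal sketch of the trade-off such an optimizer performs. Everything in it is illustrative: the medium names, the cost-per-thousand figures and the function are my own assumptions; only the TV-equals-100, magazines-equal-82 weights come from the example above.

```python
# Illustrative sketch of a value-weighted media comparison.
# Costs are hypothetical; the 100/82 weights echo the
# expert-supplied values in the example above.

# Dollars per 1,000 exposures, the kind of hard data an
# optimizer can read off invoices and surveys.
COST_PER_M_EXPOSURES = {"tv_30s": 25.0, "magazine_color_page": 12.0}

# Expert-supplied effectiveness weights: a magazine page exposure
# is judged 82% as effective as a 30-second TV spot exposure.
VALUE_WEIGHT = {"tv_30s": 100, "magazine_color_page": 82}

def weighted_exposures_per_dollar(medium: str) -> float:
    """Exposures bought per dollar, discounted by the expert weight."""
    exposures = 1000.0 / COST_PER_M_EXPOSURES[medium]
    return exposures * VALUE_WEIGHT[medium] / 100.0

# Rank media by value-weighted efficiency -- the core move of a
# media-mix optimizer, resting entirely on the made-up weight.
for medium in sorted(COST_PER_M_EXPOSURES,
                     key=weighted_exposures_per_dollar, reverse=True):
    print(f"{medium}: {weighted_exposures_per_dollar(medium):.1f} "
          "weighted exposures/$")
```

Note that the invoice costs are hard data; the 82 is not. Change it and the ranking can change, which is the whole point.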


Now bad value estimates are just as wrong as bad cost data, so the obvious question is who gets to vote? If it's everyone on the account, there's a problem because they don't know enough. Expert systems need real experts more than they need large samples.

If it's restricted to the account manager, the client and the creative director, there's a different problem. These people will learn to use the system to control the result.

Historically, users have had so little confidence in the systems they've been given -- and the values they've been forced to supply -- that when the plans produced seem different, they vote again and again until they get the familiar: what they would have gotten had the plan been done without the expert system. But that is using a new model to get old answers, and hardly what the press clippings claim.


Real experts hate expert systems because they give the less expert too great a voice. Nobel laureate Richard Feynman, once part of a "Delphi survey" of the world's leading physicists, expressed his distaste this way: "I know some of these people are smarter than I am. But I'm damn sure my thinking is better than their average thinking."

Media's own ranking research expert, Andrew Ehrenberg, feels much as Mr. Feynman does. "In days when decision trees ruled," he wrote, "companies gave the task of assigning probabilities to the latest MBA recruit, who knew all about probabilities."

Model builders argue that the kinds of value decisions these systems require are made all the time, and that an expert system simply "smokes them out" -- makes them subject to scrutiny and discussion by the group, which in itself leads to better decisions. I'm not sure it does.
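In fairness to the model builders, the explicitness is real: once the value is a named parameter, anyone can sweep it and see exactly where the plan flips. A standalone sketch, again with assumed names and costs:

```python
# Standalone sketch of the "smoking out" argument: because the expert
# value is an explicit parameter, we can sweep it and watch the plan
# flip. Medium names and costs are illustrative assumptions.

COST_PER_M = {"tv_30s": 25.0, "magazine_color_page": 12.0}  # $ per 1,000

def weighted_exposures_per_dollar(medium: str, weights: dict) -> float:
    """Exposures per dollar, discounted by the supplied value weight."""
    return (1000.0 / COST_PER_M[medium]) * weights[medium] / 100.0

# Sweep the committee's magazine value from 20 to 100 and report
# which medium the optimizer would favor at each setting.
for magazine_value in range(20, 101, 10):
    weights = {"tv_30s": 100, "magazine_color_page": magazine_value}
    leader = max(COST_PER_M,
                 key=lambda m: weighted_exposures_per_dollar(m, weights))
    print(f"magazine value = {magazine_value}: plan favors {leader}")
```

Of course, the same visibility cuts both ways: a user who knows where the flip point sits can vote the value to whichever side of it produces the familiar plan.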

A 1971 Time essay (filed away by MindShare's Jon Swallen) focused on the corrupting role of made-up numbers.

"Imaginary numbers can delude even the shrewdest of leaders. For years the Pentagon demanded imaginary numbers from combat troops in Vietnam: body counts, kill ratios . . . villages free from Viet Cong control. With [these], the computers could declare with statistical certainty that the war was being won. Is it a coincidence . . . that the most elaborately measured war in American history was also the least successful?"

In advertising, expert systems try to produce better outcomes by reducing the role of judgment. Instead, they produce predictable outcomes. And they don't solve the true bottleneck in making better decisions: the scarcity of real expertise.

For an expert system to work, you need real experts.

Mr. Ephron is a partner at Ephron Papazian & Ephron, New York.
