It's been a remarkable few years.

No one in the U.S. thought much about optimization until 1997, when Procter & Gamble Co. made it the ticket-to-ride on its $1.2 billion search for a TV agency of record. No surprise, then, that our first optimizer agencies were all P&G agencies of record with strong overseas connections.

I suspect P&G knew it had pushed price negotiation to the limit. Broadcast inventory was shrinking, and with demand continuing strong, there would always be other advertisers willing to pay more.

Perhaps, the thinking went, optimizer-directed negotiation could lower the cost of using TV. Implicit in this was the sly satisfaction of learning something about TV that sellers didn't know.


The timing couldn't have been better. In 1997, Recency theory was beginning to establish reach as the most important goal in media.

At the same time, fragmentation was changing the granularity of TV from big ratings to small. That meant reach -- long synonymous with big ratings -- could no longer be bought that way. A new approach was needed, and optimizers provided it. These computer programs are special-purpose tools for buying reach -- and they work especially well with fragmented TV.

A bull market in optimizers followed, as every major media group rushed to install the programs and the Nielsen Media Research data needed to feed them. Today there are three off-the-shelf imports -- SuperMidas, X*pert and SpotOn -- used by a variety of agencies; four domestics -- Leslie Wood's Wood Optimizer, Kevin Killian's TView, Telmar's Transmit and Smart's APT -- and a few baked-from-scratch programs, like Western Media's WestOpt and MediaCom's Maxis.


It's been more than a year now. Apart from making a few Brits rich and Nielsen richer, what have optimizers done?

Have the optimizers helped lower the cost of using TV as promised? The answer is "yes, no and maybe."

"Maybe" is future tense. The next upfront may see optimization at work on the media buyer's side. The "yes" is almost entirely psychological. The zero-based approach of optimizers probably influenced buyer thinking and sold more cable. The "no" is the problem of myth, overpromise and lots more work to be done. "No" makes the most-interesting story.

The first myth is "set up the program, load the Nielsen tapes and optimize." It's not that simple.

When agencies unwrapped the package, they discovered optimizers aren't plug-and-play. The most popular optimizers don't work with data straight from the viewer tapes because they can't handily deal with all of the possible program combinations.

Besides, the reach of a telecast-specific optimization exists only at that point in time and is not likely to happen again.

So the first enormous job confronting agencies was how to prepare program and cost data for optimization. That required grouping telecasts in an intelligent way for the computer and obtaining accurate cost estimates for each grouping.

It is not an easy job.

If the groups are too broad (like conventional dayparts), the power of the optimizer is lost and agencies might as well use reach curves and the back of an envelope.

If they are too specific, the optimized schedule is too singular to be executed -- TV schedules change that quickly -- and occasionally too idiosyncratic to be real.
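To make the back-of-the-envelope alternative concrete, here is a minimal sketch of a conventional reach curve. The functional form -- reach rising toward a ceiling as gross rating points accumulate -- is a planning commonplace, but the parameter values are invented for illustration and belong to no particular agency's model.

```python
# Back-of-the-envelope reach curve: a simple diminishing-returns model.
# The saturating shape is the textbook convention; the parameter values
# below are purely illustrative, not fitted to any real data.

def reach_from_grps(grps: float, max_reach: float = 0.95,
                    half_grps: float = 250.0) -> float:
    """Estimate reach for a schedule delivering `grps` gross rating points.

    max_reach -- asymptotic ceiling (share of target ever reachable)
    half_grps -- GRP level at which half the ceiling is reached
    """
    return max_reach * grps / (half_grps + grps)

if __name__ == "__main__":
    for grps in (50, 100, 200, 400, 800):
        print(f"{grps:4d} GRPs -> {reach_from_grps(grps):.1%} reach")
```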


The best approach appears to be to divide each network's inventory into small groups of like programs with like audiences and like costs (as Western does), but the inventory-management problem is difficult and has not been entirely solved. (Maxis and SpotOn take a different tack. The proprietary Maxis approach is, I think, especially well thought out.)

The second myth is "optimizers are better because they calculate reach directly from respondent data." Not true: reach still has to be modeled. Most optimizers work with program groups, and there is no respondent viewing record for a program group. When the specific telecast is replaced by a group of telecasts in the database, the individual respondent's viewing record must also be replaced by a probability algorithm to estimate reach. This is just a new and better reach curve.
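A minimal sketch of what such a probability algorithm might look like, assuming -- purely for illustration -- that every spot in a program group independently reaches the group's average rating. Real optimizers fit richer duplication models to the respondent data; the function names here are hypothetical.

```python
# Illustrative probability model replacing respondent-level viewing records.
# Assumption (for illustration only): each spot in a program group reaches
# a random `rating` share of the target, independently of every other spot.

def group_reach(rating: float, spots: int) -> float:
    """P(at least one exposure) for `spots` announcements at a given rating."""
    return 1.0 - (1.0 - rating) ** spots

def schedule_reach(groups: list[tuple[float, int]]) -> float:
    """Combine several (rating, spots) program groups, again via independence."""
    not_reached = 1.0
    for rating, spots in groups:
        not_reached *= (1.0 - rating) ** spots
    return 1.0 - not_reached

if __name__ == "__main__":
    # Three spots in a 4-rated group plus five spots in a 1.5-rated group.
    print(f"{schedule_reach([(0.04, 3), (0.015, 5)]):.1%}")
```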


Optimizers that work off the Nielsen Cume Study (Wood and Killian) do not have respondent data with which to model reach, so they use other advanced statistical techniques to calculate daypart duplication.

In either case, the promised simplicity of respondent-level reach calculations -- "did viewer A see telecast B and not telecast C?" -- is not there. Instead, we are back to a reach and frequency "black box," where the calculations are understandable only to high-level mathematicians. Most users have no idea how their optimizer calculates reach.

Along with the myths and tedious set-up work, agencies discovered concept problems. The most serious is that optimizer programs don't consider the communication value of different kinds of TV.

To them, a rating point is a rating point, be it "Frasier" or "Top Cops."


The optimizer selection decision is made entirely on the cost-per-new-viewer-added, so the resulting schedules are driven largely by cost per thousand. This is why optimizers thrive on fragmentation. When TV viewing is dispersed, they can reduce the cost of buying reach by using a wide-ranging mix of cheaper spots.
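Here is a hedged sketch of that selection rule, reusing the independence assumption from the earlier sketch: at each step, buy the spot whose cost per incremental reach point is lowest. The candidate data are invented, and commercial optimizers layer inventory, weekly and daypart constraints on top of this core loop.

```python
# Greedy selection on cost-per-new-viewer-added (illustrative only).
# Each candidate is (name, rating, cost). Incremental reach for one more
# spot is computed under an independence assumption; real optimizers use
# fitted duplication models and many more constraints.

def optimize(candidates: list[tuple[str, float, float]], budget: float):
    not_reached = 1.0          # probability a target viewer has seen nothing
    schedule: list[str] = []
    spend = 0.0
    while True:
        best = None
        for name, rating, cost in candidates:
            if spend + cost > budget:
                continue
            incremental = not_reached * rating   # new viewers this spot adds
            if incremental <= 0:
                continue
            cpnv = cost / incremental            # cost per new-viewer point
            if best is None or cpnv < best[0]:
                best = (cpnv, name, rating, cost)
        if best is None:
            break
        _, name, rating, cost = best
        schedule.append(name)
        spend += cost
        not_reached *= (1.0 - rating)
    return schedule, 1.0 - not_reached, spend

if __name__ == "__main__":
    spots = [("prime hit", 0.09, 450_000.0), ("cable run", 0.01, 30_000.0)]
    schedule, reach, spend = optimize(spots, budget=1_000_000.0)
    print(f"{len(schedule)} spots, {reach:.1%} reach, ${spend:,.0f} spent")
```

Run the example and the cheap cable spot wins every round -- which is the point of the paragraph above: on cost per new viewer alone, fragmentation's cheaper inventory dominates the schedule.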

The bad thing about using mostly cheaper spots is that not all rating points are the same.

We think they can differ in exposure-value, which is the probability of the viewer seeing and reacting to the TV message carried.

Small adjustments in the relative exposure-value of dayparts produce large changes in optimized schedules (see table above).

Today we include exposure-value in TV selection when we use daypart budgets.

Different dayparts have very different CPMs. When planners specify a mix of dayparts, they acknowledge that certain rating points are worth more (prime time's, for example) by their willingness to pay a higher CPM when cheaper dayparts are available.


Optimizers do not use daypart budgets. They ignore rating-level and make no provision for exposure-value. They turn all of TV into a commodity priced on reach. They optimize cost, which may not optimize response.

Agencies understand this limitation and are hard at work on solving it. A widely used simple fix is to introduce "this is what we think" weights for programs or dayparts, based on the judgment of the agency. But that process is rigged to produce familiar schedules -- which is not the purpose of optimization.
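In terms of the earlier sketch, the fix amounts to multiplying each candidate's rating by a judgment weight before computing cost per new viewer, so a prime-time point counts for more than face value. The weights below are invented for illustration -- which is exactly the problem: they encode the agency's prior, not measured exposure-value.

```python
# Judgment-weighted variant of the greedy selection step.
# The weights are invented "this is what we think" numbers: > 1.0 means the
# agency judges a rating point in that daypart to be worth more than face
# value; < 1.0, less. Changing them changes the optimized schedule.

EXPOSURE_WEIGHTS = {"prime": 1.3, "early fringe": 0.9, "late night": 0.6}

def weighted_cpnv(daypart: str, rating: float, cost: float,
                  not_reached: float) -> float:
    """Cost per *weighted* new-viewer point for one more spot."""
    weight = EXPOSURE_WEIGHTS.get(daypart, 1.0)
    return cost / (not_reached * rating * weight)
```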

Lead agencies such as New York-based Carat, DDB Needham Worldwide, Grey Advertising, Ogilvy & Mather Worldwide and Chicago-based Leo Burnett Co.'s StarCom Media Services and T Media, along with Western International Media, Los Angeles, are spending money on research. They want to learn if and why some kinds of TV exposures produce greater consumer response -- and to quantify those differences to make their optimizers smarter.


The focus is on finding relationships between qualitative measures of program viewing and the viewer's ability to recall commercials carried in the program. Some of the dimensions being studied are program preferences, attention, frequency-of-viewing and minutes-viewed.

There are other approaches.

A group of major advertisers is working with ad-effectiveness researcher Michael von Gonten to conduct a series of in-market tests of TV schedules to read exposure-value through sales.

There will also be Adworks 2, an advertising-effectiveness study by Information Resources Inc., Media Marketing Assessment and Nielsen Media Research.

If research shows value differences that can be generalized to some simple attributes of TV -- time of telecast, program type, rating-level -- this could provide an objective basis for value-weighting TV.

That will then take us closer to optimizing the Big Kahuna -- product sales. But for now, the possibilities of optimization are mired in the uncertainties about how the TV medium works. The freshman rush is over. The sophomore slump is here.
