This is not a description of audience measurement in the current fragmented and swiftly evolving media environment, but rather a statement heard 40-plus years ago during congressional hearings on "The Methodology, Accuracy and Use of Ratings in Broadcasting."
How was the situation resolved then? And are there lessons for the current challenges in media measurement?
In the early 1960s, some research companies were, in effect, making up the numbers. Several were not candid about their methods. Congressional staffers acquired portions of an internal memorandum directed to Arthur Nielsen that identified "vital weaknesses" in Nielsen procedures. Clients were incensed. The industry was shaken.
To forestall congressional oversight, industry representatives appeared before a congressional committee in 1963 and pledged to self-regulate. They agreed to establish an audit to ensure that ratings services "said what they did and did what they said." The major networks committed to an ongoing program of methodological research to assess the accuracy and reliability of audience measurements.
Then, in 1970, a study by Statistical Research Inc. reviewed local TV audience estimates. Such estimates included standard demographic and product-usage ratings, but after the product-usage information was shown to be unreliable, Arbitron dropped it from its reports.
That research finding strengthened the conviction of the time that good research was good business. Accurate and reliable information facilitates a market; inaccurate estimates clog the selling and buying process.
New measures and new complications
In comparison with today, the TV world then was simple. In today's fragmented market, with many players and divergent views, it is virtually impossible to gain committee consensus on collective action.
Advertisers complain publicly about the lack of information on how media work in the marketing equation and how advertising dollars finance the media. However, there are answers to how advertising works. The ANA published a manual in 1961, Defining Advertising Goals for Measured Advertising Results (DAGMAR), that has had two editions and nine printings. It advocates the "application of management by objective to the field of advertising." Tracking what is spent and what is produced is a matter of corporate discipline. Smart, large advertisers have thousands of case studies. They know what works, what doesn't and where it works. Such work is not discussed publicly. The attitude seems to be "Why educate the competition?"
Following the DAGMAR method requires long-term planning. Some advertisers still seek the illusion offered by a "single-source" measurement. But there are no shortcuts to understanding and knowledge.
There is a free market, however, for claims of new and better measures of media audiences. And the new measures often come with claims of proprietary methods: "Trust us -- it is better!" If the methods really are better, they should be described openly. Hiding behind proprietary claims was common in the 1950s, and it led to trouble.
In that decade, the Advertising Research Foundation was instrumental in having the Bureau of the Census provide TV-ownership estimates, which served as a standard for guiding the TV rating services. The census measurement stopped in 1980, and SRI was asked to fill the gap. That ownership study continues today, vastly more complex than before: it has evolved into a home-technology inventory of TV, internet and telephone devices. What was once a relatively simple questionnaire now tracks more than 120 different forms of TV-related devices.
Keeping tabs on the playing field today is far more complicated. Claims of accuracy mean little. Measurement has not kept pace with the technology. What do we really know about the robustness of today's measurements? Not much.
Don't buy in
The need for research integrity, to say publicly what you do and do what you say, is everlasting. The need for more serious independent checking is clear. And if the methods are not disclosed or are too complicated to understand, do not buy into the game.
Our industry learned some important lessons 40 years ago, when unreliable TV audience estimates raised the threat of congressional oversight. The internet situation today is scarily similar, yet our resolve to do something about it is not. Let us rededicate ourselves to the beliefs that accurate and reliable information facilitates markets, and inaccurate estimates clog the selling and buying process. Good research is good business.