A "master plan" to improve print research had evolved from the Advertising Research Foundation Symposium held last year. It had three terrific ideas:
To reduce respondent burden and deal with the issues of sample size, explore the idea of dividing the data-gathering among several studies with the ability to integrate their results.
To sharpen the readership measurement, explore the idea of a three-prong effort: Work to improve the current methods; develop and test new techniques; and develop and test mixed-measure data fusion.
To help develop a state-of-the-art readership research system for now and the future, create a funded print research initiative similar to the Smart TV model, including a research and development laboratory.
Well, talk is cheap, especially at symposia. When the focus turned to action, nothing happened because of conflicting interests. This, as I recall, was the roll call:
Agencies and advertisers, the group least directly affected, were the most aggressive about the need for action. They pretty much voted "Yes. Let's do it."
Publishers were wary that any organized call for research improvement might further undermine confidence in current data and, in a perverse double jeopardy, cost them more money. They mostly said, "Not interested."
Research suppliers, settling for the devils they knew, solidly voted "No way."
Much of the debate was a study in denial: private reality hidden by public pretense. The reality is that we ask too much of our surveys and our respondents. The pretense is that we talk about other things.
Readership of 230 magazines cannot be accurately determined in a single interview. The list of magazines is too long. The recall period (past week, past month) is too long. The reading event (read or looked-into, any time, any place) is too inconsequential.
We are looking at title recognition and honest confusion reported as readership. There is abundant proof this technique does not work for TV.
As the ARF group pointed out, reducing respondent burden is key to improving the measurement.
The interview can be tedious. The respondent "screens in" titles from a deck of more than 230 magazine logos, then answers readership questions for each title screened in, questions on other media (including TV), and a battery of demographic questions. The average interview runs a mind-numbing 60 minutes. But averages are misleading. Twenty percent of adults do more than 50% of the reading. That means they screen in many more magazines, which results in an even longer, more repetitive interview.
Most of our magazine numbers are generated by these heavy-reader interviews. Common sense says interviews covering that many magazines (and other stuff) are overloaded. But no one really wants to confront the problem.
Each of us should be forced to take a magazine research interview and fill out a product questionnaire before we use the data to spend a client's money.
Too busy? So are the people in the sample. If we did, perhaps we'd take a fresh look at the priorities that make us push for bad data, as long as there's lots of it and the price is right.
There are a lot of good things going on in magazine research, most of them, blessedly, focused on making print more useful to advertisers.
There is issue-reader accumulation (from Mediamark Research) to encourage Target Rating Point planning and effective weight levels; sales tracking (from the Magazine Publishers of America) to show how print works in the marketplace; and database research (from Audits & Surveys and Conde Nast Publications) to provide more useful information for smaller-circulation magazines.
The missing ingredient is research and development on the core research problems. This requires a dedicated, focused, well-financed agent: a magazine "Smart" with a real print lab as its tool.
The ARF, to its credit, has gone ahead and organized a print lab of its own, which has been hard at work.
But it is a spare-time activity, with no funding and no professional research staff. To my mind, that's hopeless.
The notion that well-intentioned people can solve the many problems dogging readership studies in their spare time is naive and counterproductive.
It is comfort without cure.
A REALISTIC R&D EFFORT
Print needs an independently funded R&D initiative, managed by a first-rate research company. This is a realistic approach to improving the data we use to buy magazines. Smart did it for TV. The same idea can work for print.
Magazine research is at a crossroads. It could move forward if we were willing to go in the same direction and pay the toll.
Mr. Ephron is a partner at Ephron, Papazian & Ephron, a New York-based consultancy. (email@example.com)