When ratings fluctuate unexpectedly, the advertising community creates stories to fit the fluctuations. "It's an unusually warm (or cold) fall." "People are spending more (or less) time outdoors." "The programs are good (or no good)." "Sex is selling (or not selling)." Ad nauseam. My response: "Nonsense."
History teaches that viewing patterns are more stable than the measurements of them. Media audiences move like glaciers. Abrupt changes in total usage or in key demographics do not happen. Effectively, programming doesn't matter in the short term: the masses decide to watch TV, and only then choose what to watch.
But observations equal truth plus error. When Nielsen or Arbitron or MRI or any research company sticks a screwdriver into the system, different errors are introduced. There is a lot going on at Nielsen. Changes in weighting procedures and in field assignments. The Hispanic sample is being integrated and local people meters rolled out. Are tabulation rates affected? Is the sample turnover schedule affected?
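The point can be made concrete with a toy simulation. In the classical measurement model, each reported rating is the stable true audience plus random error; when the measurement system is altered, the error changes even though the truth does not. The numbers below (a true rating of 10 points, error spreads of 0.3 and 0.9) are hypothetical, chosen only to illustrate the model:

```python
import random

random.seed(7)

# "Observations equal truth plus error": the true audience level is
# stable (the glacier); only the measurement error changes when the
# ratings system is altered. All figures here are hypothetical.
TRUE_RATING = 10.0  # assumed stable true audience, in rating points

def measured(weeks, error_sd):
    """Simulate weekly reported ratings: truth plus random error."""
    return [TRUE_RATING + random.gauss(0, error_sd) for _ in range(weeks)]

before = measured(52, error_sd=0.3)  # old measurement procedures
after = measured(52, error_sd=0.9)   # after a methodological change

def spread(xs):
    return max(xs) - min(xs)

print(f"range before change: {spread(before):.2f} rating points")
print(f"range after change:  {spread(after):.2f} rating points")
```

The reported ratings swing noticeably more after the change, yet nothing about the audience itself has moved; the "fluctuation" lives entirely in the error term.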
Our company, Statistical Research (now Knowledge Networks/SRI), tracked Nielsen data for the industry for 30 years, and in 1988, SRI completed an industry-sponsored review of the Nielsen people-meter operations. A stronger focus on defined procedures and quality control was one key recommendation. That means carefully monitoring procedures and making changes only after testing. Without well-designed quality control, extraneous variation is guaranteed.
Our reviews revealed much to explain quirks in the Nielsen data. Sensors that determine on/off status were found on living room floors; sets that were on were miscounted as off. When Nielsen revamped field force assignments to reduce costs, many of the heaviest-viewing homes were left out of the sample. Changes in Nielsen coaching procedures led to audience variations.
Given the history, my conviction is that Nielsen is the problem. And when it is, not all audiences are affected equally. When sensors fell off, it happened on the most-used sets; the greatest relative losses were in daytime and in homes with children. We also know from earlier analyses that visitors and 18-to-34-year-olds are most prone to panel fatigue; that is, the longer a home has been in the sample, the lower the viewing levels reported. If Nielsen has done anything to affect the average tenure of sample homes, then that is a logical contributor to the current controversy.
Nielsen is also unable to measure new delivery systems, as well as some old ones (e.g., VCR playback). With the expansion of digital cable boxes, PVRs and video-on-demand, the importance of this deficiency will grow.
Nielsen must be the solution. Uncertainties about its data and systems can be eliminated if the doors and windows to its operations are thrown open to all. The Media Rating Council audit of Nielsen operations should be an open process, with reports available to clients. A commitment to such client involvement was a second recommendation from the 1988 review.
Rather than being readily available, quality control information seemingly has to be pulled out of Nielsen bit by bit. Given its business position, Nielsen has nothing to fear and everything to gain from a totally open architecture. Nielsen should have current and complete quality control reports available on its Web site.
In addition to internal checks, the only other way to evaluate a database is through external comparisons. Hence, the third 1988 recommendation was a commitment to methodological research. Creating independent sources is a difficult business challenge. Clients are not quick to provide resources for independent measures.
To change that equation, advertisers must become more involved. Return-on-investment from better measurement methods will redound to their benefit. They must push their colleagues to create better systems and more crosschecks. The clients really own the media measurement and must be assertive in acting on that ownership. As things stand, there is an unnecessary mystery around those estimates of consumer connections in which they invest billions of dollars.
Today, history supports the "Nielsen-did-it" assumption as the cause of changes. If Nielsen and the industry work together, the resulting dialogue and criticism will bring improvements that will enable all to prosper. And the next time audiences change, the industry will know why and how, if at all, that should affect its business decisions.
In the meantime, enjoy the stories as stories.
Gale Metzger is senior consultant, Knowledge Networks/SRI, Cranford, N.J., and cofounder of Statistical Research Inc. (SRI), which developed and tested SMART, an alternative TV rating system supported by 33 advertisers, agencies and telecasters.