GUEST COLUMN: Web measurement needs quality data


In a recent guest column, Doug McFarland, senior VP-general manager of Media Metrix, wrote of the need for standards in Web measurement. I'd like to propose an alternative view.

Mr. McFarland suggests that lack of standards for certain definitional and technology-related issues is the root cause of differences among Web research suppliers.

I disagree. While there are certainly some legitimate issues of definition that remain in flux, I believe that there are more important differences among research suppliers. Those variations lie in the fundamentals of survey and panel research methodology, more than in the challenges of technology.

In fact, I would argue that there are standards--the principles of "best practice" in survey research--that should apply to Web research, but that are too often overlooked.

To yield quality research, the measurement company must provide:

  • A probability sample of representative respondents . . .

  • of sufficient size for the intended purpose . . .

  • who provide valid data about their behavior . . .

  • based on a prudent definition of our data "needs" . . .

  • to a research company with robust and transparent systems of quality control and data interpretation.

Without those key components of quality in research, the results are suspect at best, and harmful at worst.

I also respectfully disagree with Mr. McFarland's argument that "the appliance is key to measuring," referring to the appliance, such as the computer, used to access digital media.

Fundamental research quality is the key to any form of media measurement. That means the quality of sampling, the quality of data collection, and the quality of a supplier's systems.

Mr. McFarland does raise some interesting points about the completeness of measurement--about the desirability of measuring all computer usage vs. simply Web usage. I'll have to leave it to advertisers to put a current and future value on research concerning proprietary-system computer usage (such as America Online, PointCast and other digital media), as described by Mr. McFarland.

Such additional data probably comes at a cost to research quality. If respondents and their employers know that all computer usage is being monitored, it will inevitably put further strain on their willingness to cooperate. The fact that we can collect additional data doesn't always mean that we should.

I conclude by asking the users of Web audience data to put these questions to all suppliers. Ask about sampling methods. Ask about response rates (properly computed, which presumes a probability sample).

Ask about sample composition, and sample sizes, and the validity of data collection, and the systems for processing data. And insist on full disclosure of all the components of quality.
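The column does not spell out how a response rate should be computed, so the following is only an illustrative sketch, using the simplest common definition (completed interviews divided by eligible sampled units); the function name and the example figures are hypothetical, not drawn from any supplier's data:

```python
def response_rate(completes: int, eligible_sampled: int) -> float:
    """Simple response rate: completed interviews / eligible sampled units.

    A meaningful denominator presumes a probability sample, where every
    eligible unit had a known chance of selection; without that, the
    resulting rate cannot be interpreted.
    """
    if eligible_sampled <= 0:
        raise ValueError("eligible_sampled must be positive")
    if completes < 0 or completes > eligible_sampled:
        raise ValueError("completes must be between 0 and eligible_sampled")
    return completes / eligible_sampled

# Hypothetical figures for illustration only: 1,200 completed interviews
# out of 4,000 eligible sampled units gives a 30% response rate.
print(response_rate(1_200, 4_000))  # 0.3
```

Industry practice distinguishes several stricter variants (for example, treating units of unknown eligibility as partly eligible), all of which shrink the reported rate; the point of the column stands under any of them.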

Those standards exist. Let's use them.

Mr. Peacock is president of Peacock Research, a research consultancy in Laurel, Md. His clients include RelevantKnowledge and the Media Rating Council.

Copyright May 1998, Crain Communications Inc.
