During a time of rapid change, our understanding of media behavior is more important than ever, but it seems we are becoming less informed.
It wasn't always like this. Once upon a time, when baby boomers were teenagers and TV was the dominant medium, the media landscape was quite peaceful. As a result, only the trade press reported on media issues, and analyses of media-use trends were rare.
Fast forward to the '90s. Media had become a "hot" topic as new digital-media technologies, especially the internet, were starting to impact media-use patterns. Consultancy companies specializing in media were founded, and regular mainstream news coverage of the industry began. Not surprisingly, these developments generated a huge market for prognoses of future trends and "the next big thing." One prominent forecaster, Nicholas Negroponte, wrote in his book "Being Digital" that TV's likely fate would be that of a casualty on the "electronic super highway." The book became a bestseller.
One could compare media and economic forecasts of the past 20 years to see which sector proved worse at prognosticating. As we know now, almost no one actually did throw out the TV -- in fact millions replaced it with a more expensive HD set. And other predictions, such as that user-generated content would replace scripted content, have not come true, either.
I would like to focus on another concern: reports about current media behavior. One would assume statements about today's media usage would be more accurate than forecasts of future behavior. Nevertheless, studies and reports -- by technology consultancies, research companies and stories in the trade and popular press -- are often riddled with errors and misleading conclusions. For example, a recent Wall Street Journal report stated: "YouTube's audience easily dwarfs the viewership for traditional TV networks" -- an erroneous conclusion that confuses monthly reach data with TV ratings (which are based on average minute reach and time spent).
The reasons for poor research and questionable reports are complex. As consumers are using more media, measuring all those media touchpoints accurately is difficult.
Currency data are no longer complete; information on mobile video, for example, is missing or limited. It is almost impossible for survey respondents to report their media behavior precisely. And there is constant pressure to simplify complex data.
Provocative headlines such as "Americans Are Quitting TV," designed to catch busy readers' attention, compound the problem. In many cases, headlines do not even reflect the content of the story and can be misleading if the reader skips the story itself. "Is Social Media Killing TV?" was the headline of an interesting piece on new research indicating that social media may actually cause an increase in TV viewing. And how many headlines have pronounced an epidemic of "cord-cutting" only to reveal a very small drop in subscriptions, or survey data showing merely that respondents have considered canceling cable?
Most troubling are studies and reports that ignore fundamental research values such as sample limitations or statistical significance and interpret very small changes as major trends. Not only do we find confusion about the facts, but also what Bob Barocci recently called an "illusion of precision." Too often, data limitations are ignored in favor of declarations of new, preferably "stunning" insights.
Here is the most salient issue -- misinformation hurts business. We need better data collection from those inside the industry, an issue we're already addressing. Just as important is a more reasoned and responsible approach to reporting that data in the media. No one should be making business decisions based on stories and headlines designed less to reflect reality than to generate shock value.