Why Pollsters Got the Election So Wrong, and What It Means for Marketers


2016 presidential candidates Hillary Clinton and Donald Trump. Credit: Photos by Gage Skidmore via Creative Commons

How did so many pollsters get the presidential election so wrong? The answer may involve shame, some of which belongs to research organizations themselves.

Research firms participate in the high-profile polling to showcase their services for brand marketers. Amid Donald Trump's huge upset, those firms now have been forced into what could be a lengthy navel-gazing exercise over why their methodologies failed.

"We're doing a range of things at SurveyMonkey, and I know my colleagues across the industry are doing the same things, to try to figure out are we having a small problem or is it something bigger?" said Chief Research Officer Jon Cohen. "We don't yet know whether this is the cataclysmic polling failure we've all anticipated for a decade or more or it's something more run of the mill."

The other part of the shame belonged to Trump voters, many of them unwilling to admit, particularly to live human beings on the other end of the phone, their plans to vote for the president-elect.

That was an effect that Trafalgar Group, a small Atlanta-based Republican-affiliated polling firm, began noticing during the Republican primaries. So it developed a system to counteract the effect. Trafalgar started asking voters not only who they planned to vote for, but also who they thought their neighbors would vote for. The latter percentage consistently came out higher than the former, said Robert Cahaly, senior strategist.

"On a live poll, the deviation was that Trump was understated probably 6%-7%, and on an automatic poll it was probably understated 3%-4%," Mr. Cahaly said.
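The article doesn't detail Trafalgar's exact model, but the basic idea -- letting the "neighbor" answer pull the stated answer upward to correct for respondents hiding their preference -- can be sketched as follows. The blending weight and the poll numbers here are illustrative assumptions, not Trafalgar's actual figures.

```python
def neighbor_adjusted_share(own_pct, neighbor_pct, weight=0.5):
    """Blend the share of respondents who say they will vote for a
    candidate with the share they attribute to their neighbors.
    The neighbor figure is assumed to leak preferences that people
    won't admit directly; weight=0.5 is an illustrative choice."""
    return weight * own_pct + (1 - weight) * neighbor_pct

# Hypothetical live-phone poll: 42% say they back Trump directly,
# but 49% say their neighbors will vote for him.
adjusted = neighbor_adjusted_share(42.0, 49.0)
print(adjusted)  # 45.5 -- roughly the 3-7 point understatement Mr. Cahaly describes
```

A simple average like this is only one way to use the second question; a real model would calibrate the weight against past elections.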

Using its adjusted numbers, Trafalgar predicted upsets for Mr. Trump in Pennsylvania and Michigan. And the firm's 306-232 prediction in the overall Electoral College vote may well end up matching the final total. The methodology got a "C" from Nate Silver's FiveThirtyEight.com, but it ended up closer than FiveThirtyEight's system of aggregating results from a large number of polls, most of which skewed toward Hillary Clinton.

Trafalgar did have Mr. Trump winning Nevada and New Hampshire, which he apparently didn't, and had him losing Wisconsin, which he won. It all evened out. "In Wisconsin, we didn't even poll, but made a prediction that it was close based on knowing there was a deviation" elsewhere, Mr. Cahaly said.

But Raghavan Mayur, president of TechnoMetrica Market Intelligence, which handles the Investor's Business Daily/TIPP poll that has been the closest or among the closest to the actual popular vote in the past three elections, isn't buying the "silent voter" theory. His final poll had Ms. Clinton ahead by 1% in the popular vote in a head-to-head contest, which appears to be closest to actual results among national polls. Overall, polls as consolidated by FiveThirtyEight averaged a 3.6% Clinton lead.

Mr. Mayur has another explanation for why other polls were wrong, one that has more direct implications for marketers -- poor-quality data.

"The best analysis in the world will not compensate for bad quality data," said Mr. Mayur, who described himself as an independent who was indifferent to whether Mr. Trump or Ms. Clinton won.

The quality issue has a direct bearing on how problems in election polling can be reflected in brand research, Mr. Mayur said. "Everybody wants the right research. But everybody wants cheap research. And you get what you pay for." He said TechnoMetrica does only live voter surveys, two-thirds on cell phones and one-third on landlines, with stringent quality control for its presidential poll.

What really drove the surprise in other polls, he said, is that they didn't capture the enthusiasm gap between Republican and Democratic voters that he saw begin to emerge Sept. 1. Mr. Mayur captures this by asking respondents to rate their interest in the election on a seven-point scale. Republicans were considerably more interested this time than four years ago, he said. As a result, they turned out on Election Day at the same 37% rate as Democrats, he said, despite their eight-percentage-point registration disadvantage. Combined with Mr. Trump winning independents by around eight percentage points, that was enough to swing the election.
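Reading Mr. Mayur's 37% figures as each party's share of the actual electorate, the arithmetic can be sketched as follows. The 90% partisan-loyalty rate and the exact 54-46 independent split are illustrative assumptions used to realize the "around eight points" figure from the article, not numbers Mr. Mayur reported.

```python
# Shares of the electorate cited in the article: both parties at 37%.
dem_share, rep_share = 0.37, 0.37
ind_share = 1 - dem_share - rep_share      # remaining 26% independents

loyalty = 0.90                             # assumed: 90% of partisans back their nominee
ind_trump, ind_clinton = 0.54, 0.46        # assumed 8-point Trump edge with independents

trump = rep_share * loyalty + dem_share * (1 - loyalty) + ind_share * ind_trump
clinton = dem_share * loyalty + rep_share * (1 - loyalty) + ind_share * ind_clinton

# With the partisan blocs equal in size, their contributions cancel,
# so the whole margin comes from independents: 26% x 8 points = ~2 points.
print(round(100 * (trump - clinton), 1))
```

The point of the exercise: once equal turnout wipes out the registration advantage, even a modest edge among independents decides the race.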

But others, particularly among players that erred toward calling a big win for Ms. Clinton, cited other factors that made this race particularly hard to call.

Ipsos, which had Ms. Clinton with a 5-point lead in its final poll, predicated that margin on a projected 60% turnout. But early indications suggest only 52% of eligible voters cast ballots. Ipsos is in the midst of a more thorough review, however, to determine what went wrong, said Pierre Le Manh, CEO-North America.

Realistically, the 3%-4% win for Ms. Clinton predicted by most polls was within the margin of error for polls and provided a degree of accuracy acceptable for many marketing purposes. "Here, it's a very big deal, because it did create a situation where everybody was expecting her to win, and Trump won," Mr. Le Manh said.
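The "within the margin of error" point is easy to check with the standard formula for a simple random sample; the 1,000-person sample size here is a typical national-poll figure assumed for illustration, not one from the article.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error (as a proportion) for a simple random
    sample of size n; p=0.5 gives the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical ~1,000-respondent national poll:
moe = margin_of_error(1000)
print(round(100 * moe, 1))  # 3.1 points
```

At roughly plus-or-minus 3 points per candidate, a predicted 3-4 point Clinton lead was statistically consistent with the narrow loss she actually suffered in the decisive states.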

Mr. Trump flummoxed estimates of likely voters in part, he said, "by bringing people to the polls who normally don't vote."

It's worth noting that some organizations that have tracked the presidential horse race in the past -- including Gallup and Pew -- opted not to this year. Growing uncertainty about results may make such efforts more likely to cause embarrassment than to showcase capabilities.

Election polling is in many ways harder than other market research, because it involves predicting a future population of people who will vote rather than monitoring past behavior or current intentions, said SurveyMonkey's Mr. Cohen, whose firm was even further off than Ipsos on its final popular vote estimate, putting Ms. Clinton up 6%.

If there was a saving grace, though, it was that SurveyMonkey's online polls showed Mr. Trump doing much better than most others -- though still not winning -- in the crucial battleground states of Michigan and Wisconsin.

Realistically, polls averaged a three-point error four years ago on the popular vote in Mr. Obama's favor too, but it didn't matter as much, because he still won, Mr. Cohen said.

"We can't ignore the fact that this is four more years down the road when more and more people don't have landlines," he said. "It's harder and harder to get sample, particularly at the state level."

That made having an online survey perhaps more desirable this year, and may have mitigated any "silent voter" effect against Mr. Trump for SurveyMonkey, helping it come closer than others in some of the swing states.

"I take a small amount of solace in that," Mr. Cohen said. "I would like to be more right than we were."

Political polling may be more closely watched and higher profile, but in many ways it needs to catch up with brand market research, said Simon Chadwick, founding and managing partner of Cambiar, a consulting firm for market-research agencies and their investors.

"What's happening increasingly in marketing is that survey research is being used to complement other forms of data," he said, be it transactional data, social-media listening, ethnography or neuroscience. "People increasingly are synthesizing those other forms of data," he said, "but in politics it doesn't seem to have happened."

Jack Hollis, group VP for marketing at Toyota Motor Sales, USA, said the automaker is careful to balance quantitative data with qualitative data. "From a marketer's standpoint, when you look at data you can only take it as somewhat directional or somewhat informative. You have to be able to in your gut and in your heart know what you are trying to accomplish," he said.

"We don't use the same kind of polled data that you see in the political arena. We are very much more about personal one-on-one relationships through the dealers giving us direct feedback from the guests about why they accept our cars or why they reject our cars," he said. "We have focus groups on every single car that we bring out ... and those are very much first hand, directly, one-on-one tell me about this. It's experiential, versus a hidden data source where it's a poll 'hit a button' type of deal."

Mr. Chadwick has been warning for years about the growing issues of survey research caused by ever more reluctant respondents, particularly in phone research, something that he believes may have helped hide some of that "silent" Trump vote but also causes growing issues for brand research.

"Particularly people who are feeling disenfranchised, angry, they're not going to pick up the phone for a poll, and even if they do may not want to tell a stranger who they're going to vote for," Mr. Chadwick said.

Trafalgar's addition of the "neighbor" screen to suss out more honest answers is an approach rooted in behavioral economics of the sort that marketers, particularly those doing work on embarrassing subjects, are increasingly using to compensate for such bias.

"It seems very intelligent," he said. "And maybe mainstream pollsters would do well to take it into account."

Contributing: E.J. Schultz