As Election Day has drawn closer, opinion polls have taken up ever more of the news hole. Which of the dozens of polls that cross journalists’ desks are reliable, and which should be ignored?
In 2016, the expectation of a Clinton victory, derived from polls, led to a flurry of finger-pointing. “How did everyone get it so wrong?” blared a Politico headline. Some of the final pre-election polls were indeed far off the mark, but most of those were state-level polls. The national polls, on the other hand, were relatively accurate. Most had Hillary Clinton ahead by a couple of percentage points, which was roughly her popular-vote margin.
The best national polls — the Wall Street Journal/NBC News poll being an example — have a remarkable track record in estimating the final presidential vote. One reason is that they have high methodological standards and rely on live interviewers rather than automated callers. As well, in their final poll and sometimes earlier ones, they sample a large number of respondents. Sampling error — the expected gap between what a sample shows and what the electorate as a whole is thinking — is a function of sample size: everything else being equal, the larger the sample, the smaller the sampling error. Additionally, leading national polls have data from past elections that has enabled them, after the fact, to test alternative estimation models in order to discover which ones yield the most precise predictions. How much weight, for example, should be placed on respondents’ stated vote intention relative to the strength of their party identification? Journalists receive only the weighted results, which leaves them somewhat at the mercy of pollsters, who do not routinely disclose the weights they have used in translating a poll’s raw data into the results that are made public.
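To give a sense of the numbers involved, here is a minimal sketch, in Python, of the textbook margin-of-error calculation for a simple random sample. Real polls are weighted and carry design effects that this formula ignores, so their true error is somewhat larger, but the basic relationship between sample size and precision holds:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.

    p=0.5 is the worst case (a 50-50 race); z=1.96 corresponds to 95%
    confidence. Real polls adjust for weighting and design effects, so
    their actual error is usually somewhat larger than this figure.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 800, 1600):
    print(f"n = {n}: ±{margin_of_error(n) * 100:.1f} points")
# n = 400:  ±4.9 points
# n = 800:  ±3.5 points
# n = 1600: ±2.4 points
```

Note that cutting the margin of error in half requires quadrupling the sample, which is part of why large final-poll samples are expensive to collect.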
National polls also serve as checks on each other. Through the 1970s, national polls were few in number. Since the 1990s, more than 200 such polls have been conducted during the presidential general election. It’s easy to identify an outlier — a poll with findings that are markedly at odds with those of other polls. The Rasmussen poll, for instance, is often an outlier, sometimes by a large amount, typically in the Republican direction. The proliferation of polls has also made it possible to derive estimates by aggregating the polls — a methodology applied, for example, by FiveThirtyEight’s Nate Silver.
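As a rough illustration, here is a minimal sketch of what aggregation and outlier-spotting involve. The figures are invented, and this is not FiveThirtyEight’s actual model, which also weights polls by factors such as recency, sample size and pollster rating:

```python
# Hypothetical final-poll margins for one candidate, in percentage points.
polls = {
    "Poll A": 4.0,
    "Poll B": 5.5,
    "Poll C": 3.0,
    "Poll D": 10.0,  # deliberately out of line with the others
}

# The simplest possible aggregate: an unweighted average of the margins.
average = sum(polls.values()) / len(polls)
print(f"Polling average: {average:+.1f} points")

# Flag a poll as a potential outlier if it sits far from the average of
# the *other* polls. The 4-point cutoff is arbitrary, for illustration only.
for name, margin in polls.items():
    rest = [m for other, m in polls.items() if other != name]
    rest_average = sum(rest) / len(rest)
    if abs(margin - rest_average) > 4:
        print(f"{name} ({margin:+.1f}) is markedly out of line with the other polls")
```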
Silver has developed a grading system for high-frequency polling organizations by comparing their poll results to actual election results. The highest-graded national polls tend to be those that are university-based, such as the Monmouth Poll. Polls that rely on live interviewers also tend to get high grades, while those that rely on automated questioning tend to be less accurate. The lowest-graded national polls tend to be online-only polls such as SurveyMonkey and Google Consumer Surveys. Although the accuracy of online polling has increased over time as estimation models have improved, online polls are still markedly less accurate on average than polls that employ more traditional methods.
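A stripped-down version of the grading idea, with hypothetical pollsters and invented figures, might look like the sketch below. Actual pollster ratings incorporate far more than raw error, but the core step is the same: compare final poll margins with the results that followed.

```python
# Invented track records: each entry pairs a pollster's final margin
# with the actual result, in percentage points, across past races.
history = {
    "Pollster X": [(3.0, 2.1), (-1.5, -0.8), (5.0, 4.6)],
    "Pollster Y": [(6.0, 2.1), (2.5, -0.8), (8.0, 4.6)],
}

def average_miss(pairs):
    """Mean absolute gap between the final poll margin and the actual result."""
    return sum(abs(poll - actual) for poll, actual in pairs) / len(pairs)

# Rank pollsters from most to least accurate on this simple measure.
for pollster, pairs in sorted(history.items(), key=lambda item: average_miss(item[1])):
    print(f"{pollster}: average miss of {average_miss(pairs):.1f} points")
```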
In general, state-level polls tend to have weaker track records than national polls. Many of them are conducted by organizations that don’t poll regularly and lack sophisticated models for weighting the results. Some of the state-level polls during the 2016 election, for example, failed even to correct for the fact that their samples included a disproportionately high number of college-educated respondents. Budgetary constraints also affect many statewide polls. Those that rely on live interviewers tend to use relatively small samples to hold down costs, while others use less-reliable automated calling methods in order to survey a larger number of respondents. Then, too, relatively few polls are conducted in most states, which reduces the opportunity to judge a poll by comparing its results with those of other polls. And when there are comparators, there’s a question of whether they are reliable enough to serve that purpose: many of the newer state polls are low-budget affairs made possible by such developments as robocalls and online surveying.
There are some solid statewide polls, including the multi-state polls conducted by the New York Times in collaboration with Siena College. These polls are methodologically rigorous but otherwise reflect the tradeoffs characteristic of most state polls. Compared with the New York Times/Siena College national polls, the state-level polls typically sample only half as many respondents — meaning that the state polls have a larger sampling error.
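A back-of-the-envelope calculation, using the textbook margin-of-error formula sketched earlier and an illustrative sample size rather than the Times/Siena figures, shows what halving the sample does to precision:

$$
\mathrm{MOE}_{95\%} \approx \frac{0.98}{\sqrt{n}}
\qquad\Rightarrow\qquad
\frac{\mathrm{MOE}(n/2)}{\mathrm{MOE}(n)} = \sqrt{2} \approx 1.4
$$

For example, with n = 1,000 the margin of error is roughly ±3.1 points; with n = 500 it is roughly ±4.4 points.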
In 2016, the national popular-vote margin was narrow enough to allow for the possibility that the popular-vote winner would lose the Electoral College. The larger the national margin, the smaller the likelihood of such an outcome. As the campaign unfolds over its final weeks, journalists will need to take that statistical tendency into account, just as they’ll need to consider the profiles, methods, and track records of the polls they cite.
Thomas E. Patterson is Bradlee Professor of Government & the Press at Harvard’s Kennedy School and author of the recently published Is the Republican Party Destroying Itself? Journalist’s Resource plans to post a new installment of his Election Beat 2020 series every week leading up to the 2020 U.S. election. Patterson can be contacted at thomas_patterson@harvard.edu.
Further reading:
“FiveThirtyEight’s Pollster Ratings,” FiveThirtyEight, May 19, 2020.
Will Jennings, Michael Lewis-Beck, and Christopher Wlezien, “Election forecasting: Too far out?” International Journal of Forecasting 36, 2020.
Costas Panagopoulos, Kyle Endres, and Aaron C. Weinschenk, “Preelection poll accuracy and bias in the 2016 U.S. general elections,” Journal of Elections, Public Opinion and Parties 28, 2018.
Thomas E. Patterson, “Of Polls, Mountains: U.S. Journalists and Their Use of Election Surveys,” Public Opinion Quarterly 69, 2005.