Expert Commentary

Polling fundamentals and concepts: An overview for journalists

Basic polling concepts for journalists: how polls are conducted, who conducts them, and what to watch out for when reporting on polling results.


The 2016 presidential election surprised many because Donald Trump’s win defied the vast majority of polls. In the aftermath, some blamed journalists for rushing poll numbers out quickly without explaining basic polling caveats. For all the attention they receive, polls are only as valid as their design, execution and analysis.

The best polls are produced by independent, nonpartisan polling organizations with no vested interest in the outcome of the findings. These include organizations such as Gallup and the Pew Research Center, as well as media partnerships such as CBS News/New York Times, ABC News/Washington Post and NBC News/Wall Street Journal. Many surveys are conducted by partisan actors — political consulting firms, industry groups and candidates. In some cases, their findings are biased by factors such as respondent selection and question wording. Partisan polls need to be carefully scrutinized and, when possible, reported in comparison with nonpartisan poll results.

It’s important to remember that polls are a snapshot of opinion at a point in time. Despite 60 years of experience since Truman defied the polls and defeated Dewey in the 1948 presidential election, pollsters can still miss big: In the 2008 Democratic primary in New Hampshire, Barack Obama was pegged to win, but Hillary Clinton came out on top. A study in Public Opinion Quarterly found that “polling problems in New Hampshire in 2008 were not the exception, but the rule.” In a fluid political environment, it is risky to assume that polls can predict the distribution of opinion even a short time later.

Here are some polling concepts that journalists and students should be familiar with:

  • In a public opinion poll, relatively few individuals — the sample — are interviewed to estimate the opinions of a larger population. The mathematical laws of probability dictate that if a sufficient number of individuals are chosen truly at random, their views will tend to be representative.
  • A key factor for any poll is sample size: as a general rule, the larger the sample, the smaller the sampling error. A properly drawn sample of 1,000 individuals has a sampling error of about plus or minus 3%, which means that the proportions of the various opinions expressed by the people in the sample are likely to be within 3 percentage points of those of the whole population. (A short worked example follows this list.)
  • In all scientific polls, respondents are chosen at random. Surveys with self-selected respondents — for example, people interviewed on the street or who just happen to participate in a web-based survey — are intrinsically unscientific.
  • The form, wording and order of questions can significantly affect poll results. With some complex issues — the early debate over human embryonic stem cells, for example — pollsters have erroneously measured “nonopinions” or “nonattitudes”: respondents who had not thought through the issue voiced an opinion only because a polling organization asked. In such cases, poll results fluctuated wildly depending on how the question was worded.
  • Generic ballot questions test the mood of voters prior to an election. Rather than mentioning candidates’ names, they ask whether the respondent would vote for a Republican or a Democrat if the election were held that day. While such questions can give a sense of where things stand overall, they miss how respondents feel about specific candidates and issues.
  • Poll questions can be asked face-to-face, by telephone with live interviewers or automated calls, or by mail or email. The rise of mobile-only households has complicated polling efforts, as has Americans’ increasing reluctance to participate in telephone polls. Nevertheless, telephone polls have a better record of accuracy than Internet-based polls. Whatever the technique used, it is important to understand how a poll was conducted and to be wary of reporting any poll that appears to have used a questionable methodology.
  • Social desirability bias occurs when respondents provide answers they think are socially acceptable rather than their true opinions. Such bias often occurs with questions on difficult issues such as abortion, race, sexual orientation and religion.
  • Beware of push polls, which are thinly disguised attempts by partisan organizations to influence voters’ opinions rather than measure them.
  • Some survey results that get reported are based on a “poll of polls,” in which multiple polls are averaged together. Prominent sites that engage in this practice include FiveThirtyEight, Real Clear Politics and the Cook Political Report. There are, however, any number of methodological arguments over how to do this accurately, and some statisticians object to mixing polls at all. (A toy example after this list shows why the averaging method matters.)
  • When reporting on public-opinion surveys, include information on how they were conducted: who was polled, when and how. Report the sample size, the margin of error, the organizations that commissioned and executed the poll, and whether those organizations have any ideological leanings. Avoid polling jargon, and report the findings as clearly as possible.
  • Compare and contrast multiple polls when appropriate. If the same question was asked at two different points in time, what changed? If two simultaneously conducted polls give different results, find out why. Talk to unbiased polling professionals or scholars to provide insight. If you’re having trouble finding experts to put findings in perspective, exercise caution.
  • When polls appear in news stories, they typically emphasize the “horse race” aspects of politics. This focus can obscure poll findings of equal or greater significance, such as how voters feel about the issues and how their candidate preferences are shaped by those issues.
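The “plus or minus 3%” figure cited in the sample-size bullet comes from a standard statistical formula. Here is a minimal sketch in Python; the function name, the worst-case assumption and the roughly 95% confidence level are our choices for illustration, not a standard set by any polling organization:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # n: sample size; p: assumed proportion (0.5 is the worst case and
        # yields the widest margin); z: z-score for the confidence level
        # (1.96 corresponds to roughly 95% confidence)
        return z * math.sqrt(p * (1 - p) / n)

    print(f"{margin_of_error(1000):.1%}")  # ~3.1% -- the familiar plus or minus 3
    print(f"{margin_of_error(4000):.1%}")  # ~1.5% -- four times the sample, half the error

Note the diminishing returns: halving the margin of error requires quadrupling the number of respondents, which is one reason many national polls settle for roughly 1,000 interviews.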
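The “poll of polls” bullet notes that statisticians argue over how to combine surveys. A toy example shows why: a simple average treats every poll equally, while weighting by sample size lets larger, lower-variance polls pull the average harder. Real aggregators such as FiveThirtyEight use far more elaborate models (adjusting for pollster track records, recency and so on); the figures below are invented purely for illustration:

    # Hypothetical results for one candidate: (vote share, sample size)
    polls = [(0.48, 1000), (0.51, 400), (0.46, 2000)]

    # Naive poll of polls: every survey counts equally
    simple = sum(share for share, _ in polls) / len(polls)

    # One common alternative: weight each poll by its sample size,
    # so larger (lower-variance) surveys pull the average harder
    weighted = sum(share * n for share, n in polls) / sum(n for _, n in polls)

    print(f"simple: {simple:.1%}  weighted: {weighted:.1%}")  # 48.3% vs. 47.2%

More than a full percentage point of difference from the same three polls, purely from the choice of averaging method, is exactly the kind of discrepancy behind those methodological arguments.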

For those interested in a deeper dive into polling, Journalist’s Resource has a number of academic studies on measuring public opinion: “I’m Not Voting for Her: Polling Discrepancies and Female Candidates,” “Measuring Americans’ Concerns about Climate Change,” “Dynamic Public Opinion: Communication Effects over Time” and “Exit Polls: Better or Worse Since the 2000 Election?” are just a few of those available.

____

This article is based on work by Thomas Patterson, Harvard’s Bradlee Professor of Government and the Press and research director of Journalist’s Resource; Charlotte Grimes, Knight Chair in Political Reporting at Syracuse University; and the Roper Center for Public Opinion Research at the University of Connecticut.

Keywords: polling, elections
