

Polling fundamentals and concepts: An overview for journalists

Last updated: November 10, 2015

Polling fundamentals (Keith Bishop)

As the calendar turns toward Election 2016, a thickening storm of pre-election polls already has begun, covering every possible angle of contests that remain almost a year away. Despite all the lavish attention, however, polls are only as valid as their design, execution and analysis.

The best polls are produced by independent, nonpartisan polling organizations with no vested interest in the outcome of the findings. These include organizations like Gallup and the Pew Research Center, as well as media partnerships such as CBS News/New York Times, ABC News/Washington Post and NBC News/Wall Street Journal. Many surveys are conducted by partisan actors — political consulting firms, industry groups and candidates. In some cases, the findings are biased by factors such as respondent selection and question wording. Partisan polls need to be carefully scrutinized and, when possible, reported in comparison with nonpartisan poll results.

It’s important to remember that polls are a snapshot of opinion at a point in time. Despite 60 years of experience since Truman defied the polls and defeated Dewey in the 1948 presidential election, pollsters can still miss big: In the 2008 Democratic primary in New Hampshire, Barack Obama was pegged to win, but Hillary Clinton came out on top. A study in Public Opinion Quarterly found that “polling problems in New Hampshire in 2008 were not the exception, but the rule.” In a fluid political environment, it is risky to assume that polls can predict the distribution of opinion even a short time later.

Here are some polling concepts that journalists and students should be familiar with:

  • In a public opinion poll, relatively few individuals — the sample — are interviewed to estimate the opinions of a larger population. The mathematical laws of probability dictate that if a sufficient number of individuals are chosen truly at random, their views will tend to be representative.
  • A key for any poll is the sample size: as a general rule, the larger the sample, the smaller the sampling error. A properly drawn sample of 1,000 individuals has a sampling error of about plus or minus 3 percentage points, which means that the proportions of the various opinions expressed by the people in the sample are likely to fall within 3 points of those of the whole population.
  • In all scientific polls, respondents are chosen at random. Surveys with self-selected respondents — for example, people interviewed on the street or who just happen to participate in a web-based survey — are intrinsically unscientific.
  • The form, wording and order of questions can significantly affect poll results. With some complex issues — the early debate over human embryonic stem cells, for example — pollsters have erroneously measured “nonopinions” or “nonattitudes,” as respondents had not thought through the issue and voiced an opinion only because a polling organization contacted them. Poll results in this case fluctuated wildly depending on the wording of the question.
  • Generic ballot questions test the mood of voters prior to the election. Rather than mentioning candidates’ names, they ask whether the respondent would vote for a Republican or a Democrat if the election were held that day. While such questions can give a sense of where things stand overall, they miss how respondents feel about specific candidates and issues.
  • Poll questions can be asked face-to-face or by telephone, with automated calls, or by email or mail. The rise of mobile-only households has complicated polling efforts, as has the increasing reluctance of Americans to participate in telephone polls. Nevertheless, telephone polls have a better record of accuracy than Internet-based polls. Whatever the technique used, it is important to understand how a poll was conducted and to be careful about reporting any poll that seems to have employed a questionable methodology.
  • Social desirability bias occurs when respondents provide answers they think are socially acceptable rather than their true opinions. Such bias often occurs with questions on difficult issues such as abortion, race, sexual orientation and religion.
  • Beware of push polls, which are thinly disguised attempts by partisan organizations to influence voters’ opinions rather than measure them.
  • Some survey results that get reported are based on a “poll of polls,” in which multiple polls are averaged together. Prominent sites that engage in this practice include FiveThirtyEight, Real Clear Politics and the Cook Political Report. There are, however, many methodological arguments over how to do this accurately, and some statisticians object to mixing polls at all.
  • When reporting on public-opinion surveys, include information on how they were conducted — who was polled, when and how. Report the sample size, margin of error, the organizations that commissioned and executed the poll, and whether they have any ideological biases. Avoid polling jargon, and report the findings in as clear a language as possible.
  • Compare and contrast multiple polls when appropriate. If the same question was asked at two different points in time, what changed? If two simultaneously conducted polls give different results, find out why. Talk to unbiased polling professionals or scholars to provide insight. If you’re having trouble finding experts to put findings in perspective, exercise caution.
  • When polls appear in news stories, they typically emphasize the “horse race” aspects of politics. This focus can obscure poll findings of equal or greater significance, such as how voters feel about the issues and how the issues affect their candidate preferences.
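The rule of thumb above — a sample of 1,000 yielding roughly a 3-point margin of error — follows from the standard formula for the sampling error of a proportion. A minimal sketch in Python, assuming the textbook 95%-confidence formula (the function name and inputs are illustrative, not from the article):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Sampling margin of error for a proportion at roughly 95% confidence.

    n: sample size
    p: assumed proportion (0.5 is the worst case, so it is the convention)
    z: z-score for the desired confidence level (1.96 for 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 gives about a 3.1-point margin of error.
print(round(margin_of_error(1000) * 100, 1))
```

Note that the error shrinks with the square root of the sample size: quadrupling the sample only halves the margin of error, which is why samples much larger than about 1,000 rarely pay for themselves.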
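The “poll of polls” idea can also be sketched briefly. Each aggregation site uses its own method; the poll numbers and the sample-size weighting below are hypothetical, shown only to contrast two simple approaches:

```python
# Hypothetical poll results: (candidate's share in %, sample size)
polls = [(48.0, 1000), (51.0, 800), (49.0, 1200)]

# Simple unweighted average: every poll counts equally.
simple = sum(share for share, _ in polls) / len(polls)

# Weighted by sample size: larger polls pull the average harder.
weighted = sum(share * n for share, n in polls) / sum(n for _, n in polls)

print(f"unweighted: {simple:.1f}%, weighted: {weighted:.1f}%")
```

Real aggregators go further — weighting by pollster track record, recency and methodology — which is exactly where the methodological arguments mentioned above come in.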

For those interested in a deeper dive into polling, Journalist’s Resource has a number of academic studies on measuring public opinion: “I’m Not Voting for Her: Polling Discrepancies and Female Candidates,” “Measuring Americans’ Concerns about Climate Change,” “Dynamic Public Opinion: Communication Effects over Time” and “Exit Polls: Better or Worse Since the 2000 Election?” are just a few of those available.


This article is based on work by Thomas Patterson, Harvard’s Bradlee Professor of Government and the Press and research director of Journalist’s Resource; Charlotte Grimes, Knight Chair in Political Reporting at Syracuse University; and the Roper Center for Public Opinion Research at the University of Connecticut.

Keywords: polling, elections




    Mark A. Larson Aug 7, 2012 11:03

    Having been teaching this same material to undergrad journalism majors for 30 years, I’d encourage journalists to also turn to the AP Style Guide for insights on how to report on a poll (whether received from a credible polling firm or otherwise).

    It’s disappointing to see how many poll news stories and their headlines fail to measure up to basic reporting standards — the most common is failure to account for the margin of error percentage (assuming that the poll used a random or probability sample).

    John Wihbey Aug 7, 2012 13:26

    Mark – Sounds as if you speak from experience! This is great feedback — many thanks for the suggestion regarding AP’s Style Guide. We’ll look to point to that, as well, in an updated version of this article. Regards, John Wihbey


    Barry Hollander Aug 5, 2015 13:43

    This needs more on margin of error, statistical ties, and especially when journalists make the mistake of taking an entire sample’s MOE and applying it to subgroups, like blacks or Hispanics or only Democrats. One of the more common journalistic mistakes when it comes to polls.

    John Wihbey Aug 6, 2015 13:40

    Thanks, Barry! Want to propose a new bullet point to cover that? We’d sure appreciate it. -John (editor)
