
4 tips for understanding statistics from FiveThirtyEight’s Christie Aschwanden

Tips for numbers-shy journalists on understanding and writing about statistics.


Reporting on academic research can push journalists out of the comfortable realm of words and into the less-familiar territory of numbers. Like it or not, though, statistics are often a key element of scientific studies, so it helps to have some familiarity with the basics.

But that doesn’t mean it’s time to go back to school. “The onus shouldn’t fall completely on journalists to do statistical analyses or say whether they’re correct,” said Christie Aschwanden, FiveThirtyEight’s lead writer for science. Here are her top tips for journalists reporting on statistics:

  1. Seek outside validation of statistical analysis.
    • Aschwanden recommended asking a trained statistician whether the paper uses the appropriate statistical analyses. The American Statistical Association has a list of members willing to offer their expertise.
    • Journalists should also consider posing the same questions to the researchers involved with the papers.
    • The certainty and uncertainty a researcher has in his or her results present another critical area to probe. Journalists can ask researchers what they are certain of, how certain they are of their results and how they established that certainty.
    • “But another really important question to ask is what aren’t you sure of and how could you be wrong,” Aschwanden said. “Because if someone’s not willing to address that, or they say, ‘Oh, it can’t be wrong, we know this is right,’ I’d be pretty skeptical of them.” She suggested that attitude runs counter to the scientific method, which treats results as provisional and open to revision in light of past and future findings.
  2. P-values are an indication of whether a study’s findings are statistically significant, but they aren’t the be-all and end-all.
    • For journalists who would like to dip their toes into the pool of statistics, one important concept is the p-value. P-values quantify how consistent the collected data are with a default model that assumes no relationship between the variables in question. Data that fit the default model produce a high p-value; a low p-value means data like these would be unlikely if the default model were true, which researchers treat as evidence against it and in favor of their alternative, or experimental, hypothesis. By convention, a p-value below .05 is often the threshold for calling a result statistically significant.
    • A low p-value can indicate a statistically significant result, but it is not definitive proof that a finding is true. P-values cannot reveal whether an apparently significant finding is actually the product of chance or of systematic error, and the results of a study with a low p-value may not generalize to a broader population.
    • P-values are also easily manipulable. A practice called p-hacking involves cherry-picking data to produce a statistically significant result; the short simulation after this list illustrates how much that can inflate false positives.
    • “If they did a study and scattershot collected a bunch of data and then sort of picked through them to find something interesting, that is a less sound way of doing things than to set out in advance and lay out exactly what you’re going to measure, which things you’re going to analyze in your statistical analysis,” Aschwanden explained.
  3. Read the methods section and seek clarification as necessary.
    • Aschwanden said, “The methods section can be really boring, it can be a little bit hard to follow, but what you really need to know is what did they do and how did they do it? Why do they think they know the thing that they’re claiming?”
    • In other words, it’s key to consider how researchers are measuring whatever they set out to examine, and whether that is a valid measure.
    • If the methods are overly technical, Aschwanden again recommended seeking outside help. Close scrutiny of methods can help weed out questionable studies: “Results that end up being spurious or don’t replicate later, oftentimes they wouldn’t have passed the smell test to begin with,” she said.
  4. Give context when presenting statistics in a story.
    • When it comes to writing about statistics, Aschwanden reiterated that technical understanding is not crucial.
    • Rather, journalists should attempt to convey the level of certainty of the findings — “Does this study make us more or less sure of what we thought before, does it suggest something new? But also, what is the level of uncertainty?” Reporters, like researchers, should point out the uncertainties involved in a given study.
    • “We tend to gussy things up and make them look more exciting, or more interesting, or there’s a tendency to dismiss the uncertainty,” Aschwanden said. “I think we should actually be doing the opposite, which is disclosing the uncertainty, being cautious about it, and taking care not to hype things that are still pretty uncertain.”
    • Aschwanden recommended contextualizing findings by focusing on absolute rather than relative numbers, because results stated in relative terms can be misleading. For example, a 50 percent reduction in rates for a particular type of cancer could mean a decrease from 100 cases to 50 or from 2 to 1.
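
To make the p-value and p-hacking points above concrete, here is a minimal simulation sketch, not drawn from Aschwanden or FiveThirtyEight, that assumes Python with NumPy and SciPy. Both groups are sampled from the same distribution, so the true effect is zero and every “significant” result is a false positive.

    # Minimal sketch (assumed tooling: Python, NumPy, SciPy); not from the article.
    # With no real effect, a single pre-specified test crosses p < .05 about 5% of
    # the time; cherry-picking the best of many outcomes crosses it far more often.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies, n_per_group, n_outcomes = 2000, 30, 20

    single_hits = 0  # pre-specified plan: analyze only the first outcome
    hacked_hits = 0  # p-hacked: report whichever outcome has the smallest p-value

    for _ in range(n_studies):
        # Both groups come from the same normal distribution: no true difference.
        treatment = rng.normal(size=(n_outcomes, n_per_group))
        control = rng.normal(size=(n_outcomes, n_per_group))
        p_values = [stats.ttest_ind(treatment[i], control[i]).pvalue
                    for i in range(n_outcomes)]
        single_hits += p_values[0] < 0.05
        hacked_hits += min(p_values) < 0.05

    print(f"False positives, one pre-specified outcome: {single_hits / n_studies:.0%}")
    print(f"False positives, best of {n_outcomes} outcomes: {hacked_hits / n_studies:.0%}")

Under these assumptions the first rate sits near 5 percent while the second climbs past 60 percent, which is the arithmetic behind Aschwanden’s preference for analyses laid out in advance over results picked from scattershot data.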

While knowing how to navigate numbers can help in all of this, the truly math-phobic needn’t despair. “Some of it just comes down to having a good bullshit detector, too,” Aschwanden said. “If they’re making an extraordinary claim, that’s going to require extraordinary evidence.”
