By their very nature, academic studies are intended to increase knowledge about a particular question: Is there a relationship between voter turnout and unemployment, and if so, what is it? How will global biodiversity be affected by human population growth? Health-related studies are of particular importance, as they can answer life-and-death questions: For example, are CT scans more effective than chest X-rays in detecting early-stage lung cancer among heavy smokers?
Properly designed and rigorously executed, studies should give clear answers, even when the answer is a null result that disproves a hypothesis. The irony is that, while academic research is intended to clarify important questions, the results are often “spun” — intentionally or unintentionally exaggerated — in press releases and media coverage.
A 2012 study published in PLoS Medicine, “Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage,” looked at 70 health-related experimental studies and selected 41 for closer examination. The full studies, their associated press releases and subsequent media coverage were analyzed to understand how accurately the research results were conveyed and whether “spin,” intentional or otherwise, took place. Finally, researchers independently interpreted the results of the randomized controlled trials based on reading either the press releases or the journal articles.
The results include:
- Nearly half the studies’ press releases and the conclusions in published abstracts contained “spin” of some sort.
- “Spin” in studies’ press releases was associated with similar distortion in article abstracts. “Of the news items related to press releases, half contained ‘spin,’ usually of the same type as identified in the press release and article abstract.”
- When researchers read only the press release describing a study’s findings, they overestimated the benefits of the treatment described 27% of the time, compared to those who read the full-text peer-reviewed article.
- When reading only a news item related to the study, researchers overestimated the experimental treatment’s benefit in 24% of cases, compared to those who read the full-text peer-reviewed article.
- Factors associated with the overestimation of treatment benefits included the study’s publication in a specialized journal and its press release having “spin.”
“In an ideal world, journal articles, press releases, and news stories would all accurately reflect the results of health research. Unfortunately, the findings of randomized controlled trials … are sometimes distorted in peer-reviewed journals,” the authors write. “For example, a journal article may interpret nonstatistically significant differences as showing the equivalence of two treatments although such results actually indicate a lack of evidence for the superiority of either treatment.”
The authors highlight the crucial role that editors and reviewers at academic journals play in disseminating research findings: “These individuals … have a responsibility to ensure that the conclusions reported in the abstracts of peer-reviewed articles are appropriate and do not over-interpret the results of clinical research.”
A related 2012 study in PLoS One, “Why Most Biomedical Findings Echoed by Newspapers Turn Out to be False: The Case of Attention Deficit Hyperactivity Disorder,” examined both scientific publications and media reports and looked at the lack of follow-up on alleged scientific breakthroughs: “Because newspapers preferentially echo initial ADHD findings appearing in prominent journals, they report on uncertain findings that are often refuted or attenuated by subsequent studies. If this media reporting bias generalizes to health sciences, it represents a major cause of distortion in health science communication.”
Tags: science, training