Expert Commentary

Readers respond: Pro tips from scholars for journalists (and vice versa)

Among the main takeaways: Journalists would like academics to understand their tight deadlines. And academics would like journalists to take a statistics class.


Many readers of Journalist’s Resource fall into one of two camps: hardworking journalists and hardworking academics. Recently, in hopes of providing an opportunity for the two groups to learn from each other, we posed two questions in our weekly newsletter.

Our question to the scholars: Have you noticed any common mistakes in news stories about academic research? And our question to the journalists: What’s one thing you’d like academic researchers to understand about journalism?

We received thoughtful responses from lifelong journalists, lifelong scholars, scholars who are also journalists, and communications officers. Among the main takeaways: Journalists would like academics to understand their tight deadlines. And academics would like journalists to take a statistics class.

Several of the responses we received are posted below. We’ll update this post with future responses, and we’ll pose more questions in future e-mail newsletters. If you’d like to share your own response to either of the questions above, please send an e-mail to journalists_resource@hks.harvard.edu. You can also reach out to us via Facebook or Twitter.

——————————————

Margins of error and bodies of work

Clay Shirky, vice provost for educational technologies at New York University:  “Far and away the most common mistake is to convey studies as the output of scientific consensus, rather than as the input to that consensus.

If a raft of studies finds a correlation between A and B, and a single new study finds no correlation, at least one possible explanation is that the newest study was flawed. Instead, journalists almost always present ‘Study X contradicts conventional wisdom’ as if recency trumps accumulated testing.

In a way, this is not a mistake, or not just a mistake, because the attention-seeking model of journalism, coupled with the implicit promise that the reporter will both simplify and clarify the world, means journalists have every incentive to overplay individual studies. This is especially true if they find something different from the previous consensus.

Related but smaller versions of the same problem show up in the reporting of margins of error (when effect X was at 52% previously and is at 51% now, the incentive is to report that X is falling, not that it is statistically unchanged), or in switching between the average and the median as if they mean the same thing.

In almost all cases, better reporting about academic work will mean more hedging and more minimization of the likely future effects. Some results do indeed create significant new paradigms in science (and in academia in general), but those shifts tend to be generational at best, while journalistic outlets present them as happening far more frequently.”

 

Greg Blonder, professor of product design and engineering, Boston University:  “The biggest problem in discussing academic research in public is the need to better explain that a published, peer-reviewed paper, by itself, is not an established fact. It is an opportunity for the academic community to continue a discussion, using the highest-quality arguments. Eventually, settled facts emerge, but the process is messy and, often, the line between speculation and confirmation moves around for a while until it settles down. When the public watches the line quiver, they infer there is great uncertainty around the entire field, when in fact the academics are often arguing over a fine point.

Journalists and academics must be clearer *in every article* about the nature of this process: what is settled precedent and what is open to debate.”

 

Augie Grant, Ph.D., J. Rion McKissick Professor of Journalism and director of the Center for Teaching Excellence at the University of South Carolina:  “The biggest complaint I have is that some journalists report sample statistics from research as absolute numbers, even though they all have a margin of error that has to be reported in order to interpret the number. The most common mistakes are made in reporting poll numbers and Nielsen ratings, but the same issue must be considered when reporting any academic research based upon a sample.”
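
(A quick illustration of the point that both Shirky and Grant raise: for a simple random sample, the 95% margin of error on a reported proportion is commonly approximated as 1.96 × √(p(1−p)/n). The sketch below, in Python and using hypothetical poll numbers, shows why a one-point shift, such as the 52%-to-51% example Shirky mentions above, is well within that margin for a typical 1,000-person poll and should not be reported as a decline.)

```python
import math

def margin_of_error_95(p_hat: float, n: int) -> float:
    """Approximate 95% margin of error for a sample proportion,
    using the normal approximation: 1.96 * sqrt(p * (1 - p) / n)."""
    return 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical poll: 1,000 respondents, 52% last month, 51% this month.
n = 1000
previous, current = 0.52, 0.51

moe = margin_of_error_95(current, n)
print(f"Margin of error: +/- {moe:.1%}")                    # about +/- 3.1 points
print(f"Reported change: {abs(current - previous):.1%}")    # 1.0 point
print(f"Change smaller than margin of error: {abs(current - previous) < moe}")
```

A fuller analysis would compare the two polls’ confidence intervals directly, but even this back-of-the-envelope check is enough to flag a “support is falling” headline as unsupported.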

 

Carolyn O’Donnell, communications director of the California Strawberry Commission:  “On the academic side, I find that researchers (or their university communications offices) tend to present an association as causation. It’s somewhat understandable, as it brings publicity to the department and university, and it helps build the case for continued funding for research in that particular area. On the journalist side, the publication is also looking for ‘eyes’ and ‘engagement’ with a story, and may not always dig into the details. A little bit of science education and statistics goes a long way toward understanding the analysis, rather than just being dazzled by the data.”

On deadline(s)

Tamara Jeffries, associate professor and interim chair of the Department of Journalism and Media Studies at Bennett College in Greensboro, North Carolina; former executive editor of Essence magazine:  “My academic colleagues have a hard time understanding the news cycle: how quickly they need to turn something around for it to be timely and relevant. Publication for them means working on an article for a year; for us, it may mean finishing an article before sundown, or sooner.

Some of them are reluctant to engage in social media because they are so (rightly) committed to having information be exactly right. They don’t want to do 280 characters because that won’t allow them to explain the methodology, the p-values, etc.

I have also learned that, at least in some disciplines, a scholar doesn’t want to be seen as too engaged with the media. The term ‘public scholar’ is anathema to some folks. They want to be seen as focused on scholarly work only, not courting the press.

I’d like my academic colleagues to understand that journalists are JUST as committed to getting the facts right as researchers are. (We are researchers, too, after all.) But our work has to be explained in a way that enables the ‘average’ reader to a) understand and b) care about the topic. Dense academic language doesn’t do much toward that goal.”

 

Shula Neuman, executive editor at St. Louis Public Radio:  “Here is my answer to academics: 1) When your PR people (or you) put out a news release about research, please make sure you are available to talk about it, not out of the country or at some planned conference. 2) Please talk like a human and drop the jargon. That last point applies more to scientists than to journalism academics.”

 

Pedro Canario, editor-in-chief of ConJur, a law and justice news publication in São Paulo, Brazil:  “What would I want academic researchers to understand about journalism? Our conceptions of time and style are completely different. At least in Brazil, academic researchers tend to value content over form, [which] tends to alienate most readers. And our job as journalists is the complete opposite: we have an obligation to convey the message in the most approachable way possible, [which] brings an inevitable concern with style and with presenting a well-written story. So we often clash over the terms used in a story, or over details that are huge in the academic world but minor to the so-called ‘common reader.’

That leads me to time. We do not have two years to work on a thesis, and we often need help understanding the basics. I know it’s exhausting for a Ph.D. to explain undergraduate-level material, but the newspaper has to be on newsstands by morning, whether we’re finished with our revolutionary and groundbreaking piece or not. Sometimes we make mistakes, obviously, but that shouldn’t be taken as a sign of lazy work or stupidity, as professors often suggest. Ours is a thankless job, and we have to summarize years of research in a paragraph or two.”

 
