Expert Commentary

Researchers compare AI policies and guidelines at 52 news organizations around the world

Artificial intelligence is informing and assisting journalists in their work, but how are newsrooms managing its use? Research on AI guidelines and policies from 52 media organizations around the world offers some answers.

(Mojahid Mottakin / Unsplash)

In July 2022, just a few newsrooms around the world had guidelines or policies for how their journalists and editors could use digital tools that run on artificial intelligence. One year later, dozens of influential, global newsrooms had formal documents related to the use of AI.

In between, artificial intelligence research firm OpenAI launched ChatGPT, a chatbot that can produce all sorts of written material when prompted: lines of code, plays, essays, jokes and news-style stories. Elon Musk and Sam Altman co-founded OpenAI in 2015, and Microsoft has invested billions of dollars in the company over the years.

Newsrooms including USA Today, The Atlantic, National Public Radio, the Canadian Broadcasting Corporation and the Financial Times have since developed AI guidelines or policies — a wave of recognition that AI chatbots could fundamentally change the way journalists do their work and how the public thinks about journalism.

Research posted in September 2023 on the preprint server SocArXiv is among the first to examine how newsrooms are handling the proliferating capabilities of AI-based platforms. Preprints have not undergone formal peer review or been published in an academic journal, though the paper is under review at a prominent international journal, according to one of the authors, Kim Björn Becker, a lecturer at Trier University in Germany and a staff writer for the newspaper Frankfurter Allgemeine Zeitung.

The analysis provides a snapshot of the current state of AI policies and documents for 52 news organizations, including newsrooms in Brazil, India, North America, Scandinavia and Western Europe.

Notably, the authors write that AI policies and documents from commercial news organizations, compared with those that receive public funding, “seem to be more fine-grained and contain significantly more information on permitted and prohibited applications.”

Commercial news organizations were also more apt to emphasize source protection, urging journalists to take caution when, for example, using AI tools to help make sense of large amounts of confidential or background information, “perhaps owing to the risk legal liability poses to their business model,” they write.

Keep reading to learn what else the researchers found, including a strong focus on journalistic ethics across the documents, as well as real-world examples of AI being used in newsrooms — plus, how the findings compare with other recent research.

AI guidance and rules focus on preserving journalistic values

AI chatbots are a type of generative AI, meaning they create content when prompted. They are based on large language models, which are themselves trained on huge amounts of existing text. (OpenAI rivals Google and Meta have announced their own large language models in the past year.)

So, when you ask an AI chatbot to write a three-act play, in the style of 19th-century Norwegian playwright Henrik Ibsen, about the struggle for human self-determination in a future dominated by robots, it is able to do this because it has processed Ibsen’s work along with the corpus of science fiction about robots overtaking humanity.
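As an illustration only, here is a minimal sketch of how a prompt like that might be sent to a chatbot programmatically, assuming OpenAI’s Python client and an API key available in the environment; the model name is a placeholder, and ChatGPT’s web interface handles the same request conversationally.

```python
# Minimal sketch (illustrative, not from the research): prompting a chatbot
# through OpenAI's Python client. Assumes an API key is set in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            "Write a three-act play, in the style of Henrik Ibsen, about the "
            "struggle for human self-determination in a future dominated by robots."
        ),
    }],
)

print(response.choices[0].message.content)
```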

Some news organizations for years have used generative AI for published stories, notably the Associated Press for simple coverage of earnings reports and college basketball game previews. Others that have dabbled in AI-generated content have come under scrutiny for publishing confusing or misleading information.

The authors of the recent preprint paper analyzed the AI policies and guidelines, most of them related to generative AI, to understand how publishers “address both expectations and concerns when it comes to using AI in the news,” they write.

The most recent AI document in the dataset is from NPR, dated July 2023. The oldest is from the Council for Mass Media, a self-regulatory body of news organizations in Finland, dated January 2020.

“One thing that was remarkable to me is that the way in which organizations dealt with AI at this stage did exhibit a very strong sense of conserving journalistic values,” says Becker. “Many organizations were really concerned about not losing their credibility, not losing their audience, not trying to give away what makes journalism stand out — especially in a world where misinformation is around in a much larger scale than ever before.”

Other early adopters include the BBC and German broadcaster Bayerischer Rundfunk, “which have gained widespread attention through industry publications and conferences,” and “have served as influential benchmarks for others,” the authors write.

Many of the documents were guidelines — frameworks, or best practices for thinking about how journalists interact with and use AI, says Christopher Crum, a doctoral candidate at Oxford University and another co-author. But a few were prescriptive policies, Crum says.

Among the findings:

  • Just over 71% of the documents mention one or more journalistic values, such as public service, objectivity, autonomy, immediacy — meaning publishing or broadcasting news quickly — and ethics.
  • Nearly 70% of the AI documents were designed for editorial staff, while most of the rest applied to an entire organization, including the business side, which might use AI for advertising or hiring purposes. One policy applied only to the business side.
  • And 69% mentioned AI pitfalls, such as “hallucinations,” the authors write, in which an AI system makes up facts.
  • About 63% specified the guidelines would be updated at some point in the future — 6% of those “specified a particular interval for updates,” the authors write — while 37% did not indicate if or when the policies would be updated.
  • Around 54% of the documents cautioned journalists to be careful to protect sources when using AI, with several addressing the potential risk of revealing confidential sources when feeding information into an AI chatbot.
  • Some 44% allow journalists to use AI to gather information and develop story ideas, angles and outlines. Another 4% disallow this use, while half do not specify.
  • Meanwhile, 42% allow journalists to use AI to alter editorial content, such as editing and updating stories, while 6% disallow this use and half do not specify.
  • Only 8% state how the AI policies would be enforced, while the rest did not mention accountability mechanisms.

How the research was conducted

The authors found about two-thirds of the AI policy documents online and obtained the remainder through professional and personal contacts. About two-fifths were written in English. The authors translated the rest into English using DeepL, a translation service based on neural networks, a backbone of modern AI.

They then used statistical software to break the documents into five-word blocks to assess their similarity. It’s a standard way to compare texts linguistically, Crum says. He explains that the phrase “I see the dog run fast” would have two five-word blocks: “I see the dog run” and “see the dog run fast.”

If one document said, “I see the dog run fast” while another said, “I see the dog run quickly,” the first block of five words would be the same, the second block different — and the overall similarity between the documents would be lower than if the sentences were identical.
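As an illustration only (the paper does not publish its code, and the authors’ statistical software and exact similarity measure may differ), here is a minimal Python sketch of that kind of five-word-block comparison, using Jaccard similarity, that is, shared blocks divided by all distinct blocks:

```python
# Minimal sketch (not the authors' code): compare two documents by their
# overlapping five-word blocks, using Jaccard similarity as one possible measure.
def five_word_blocks(text):
    words = text.lower().split()
    return {tuple(words[i:i + 5]) for i in range(len(words) - 4)}

def similarity(doc_a, doc_b):
    blocks_a, blocks_b = five_word_blocks(doc_a), five_word_blocks(doc_b)
    shared = blocks_a & blocks_b      # blocks appearing in both documents
    combined = blocks_a | blocks_b    # all distinct blocks across both
    return len(shared) / len(combined) if combined else 1.0

# Crum's example: the two sentences share one of their three distinct blocks.
print(similarity("I see the dog run fast",
                 "I see the dog run quickly"))  # 0.333...
```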

As a benchmark for comparison, the authors performed the same analysis on the news organizations’ editorial guidelines. The editorial guidelines were somewhat more similar to one another than the AI documents were, the authors find.

“Because of the additional uncertainty in the [AI] space, the finding is that the AI guidelines are coalescing at a slightly lower degree than existing editorial guidelines,” Crum says. “The potential explanation might be, and this is speculative and not in the paper, something along the lines of, editorial guidelines have had more time to coalesce, whereas AI guidelines at this stage, while often influenced by existing AI guidelines, are still in the nascent stages of development.”

The authors also manually identified overarching characteristics of the documents relating to journalistic ethics, transparency and human supervision of AI. About nine-tenths of the documents specified that if AI were used in a story or investigation, its use had to be disclosed.

“My impression is not that organizations are afraid of AI,” Becker says. “They encourage employees to experiment with this new technology and try to make some good things out of it — for example, being faster in their reporting, being more accurate, if possible, finding new angles, stuff like that. But at the same time, indicating that, under no circumstances, shall they pose a risk on journalistic credibility.”

AI in the newsroom is evolving

The future of AI in the newsroom is taking shape, whether that means journalists primarily using AI as a tool in their work or newsrooms becoming broadly comfortable with using AI to produce public-facing content. The Journalist’s Resource has used DALL-E 2, an OpenAI product, to create images to accompany human-reported and human-written research roundups and articles.

Journalists, editors and newsroom leaders should, “engage with these new tools, explore them and their potential, and learn how to pragmatically apply them in creating and delivering value to audiences,” researcher and consultant David Caswell writes in a September 2023 report for the Reuters Institute for the Study of Journalism at Oxford. “There are no best practices, textbooks or shortcuts for this yet, only engaging, doing and learning until a viable way forward appears. Caution is advisable, but waiting for complete clarity is not.”

The Associated Press began using AI in 2015 to generate stories on publicly traded firms’ quarterly earnings reports. But the organization’s AI guidelines, released in August 2023, specify that AI “cannot be used to create publishable content and images for the news service.”

The AP had partnered with AI-content generation firm Automated Insights to produce the earnings stories, The Verge reported in January 2015. The AP also used Automated Insights to generate more than 5,000 previews for NCAA Division I men’s basketball games during the 2018 season.

Early this year, Futurism staff writer Frank Landymore wrote that tech news outlet CNET had been publishing AI-generated articles. Over the summer, Axios’ Tyler Buchanan reported USA Today was pausing its use of AI to create high school sports stories after several such articles in The Columbus Dispatch went viral for peculiar phrasing, such as “a close encounter of the athletic kind.”

And on Nov. 27, Futurism published an article by Maggie Harrison citing anonymous sources who alleged that Sports Illustrated had recently been using AI-generated content and authors, complete with AI-generated headshots, for product review articles.

Senior media writer Tom Jones of the Poynter Institute wrote the next day that the “story has again unsettled journalists concerned about AI-created content, especially when you see a name such as Sports Illustrated involved.”

The Arena Group, which publishes Sports Illustrated, posted a statement on X the same day the Futurism article was published, denying that Sports Illustrated had published AI-generated articles. According to the statement, the product review articles produced by a third-party company, AdVon Commerce, were “written and edited by humans,” but “AdVon had writers use a pen or pseudo name in certain articles to protect author privacy — actions we strongly condemn — and we are removing the content while our internal investigation continues and have since ended the partnership.”

On Dec. 11, the Arena Group fired its CEO. Arena’s board of directors “met and took actions to improve the operational efficiency and revenue of the company,” the company said in a brief statement, which did not mention the AI allegations. Several other high-level Arena Group executives, including the COO, were also fired last week, according to the statement.

Many of the 52 policies reviewed for the preprint paper take a measured approach. About half caution journalists against feeding unpublished work into AI chatbots. Many of those that did were from commercial organizations.

For example, reporters may obtain voluminous government documents or have hundreds of pages of interview notes or transcripts, and may want to use AI to help make sense of it all. At least one policy advised reporters to treat anything that goes into an AI chatbot as published — and publicly accessible, Becker says.

Crum adds that the research team was “agnostic” in its approach — not for or against newsrooms using AI — with the goal of conveying the current landscape of newsroom AI guidelines and policies.

Themes on human oversight in other recent research

Becker, Crum and their coauthor on the preprint, Felix Simon, a communication researcher and doctoral student at Oxford, are among a growing body of scholars and journalists interested in informing how newsrooms use AI.

In July, University of Amsterdam postdoctoral researcher Hannes Cools and Northwestern University communications professor Nick Diakopoulos published an article for the Generative AI in the Newsroom project, which Diakopoulos edits, examining publicly available AI guidelines from 21 newsrooms.

Cools and Diakopoulos read the documents and identified themes. The guidelines generally stress the need for human oversight. Cools and Diakopoulos examined AI documents from many of the same newsrooms as the preprint authors, including the CBC, Insider, Reuters, Nucleo, Wired and Mediahuis, among others.

“At least for the externally facing policies, I don’t see them as enforceable policies,” says Diakopoulos. “It’s more like principle statements: ‘Here’s our goals as an organization.’”

As for feeding confidential material into AI chatbots, Diakopoulos says the underlying issue is potentially sharing that information with a third party — OpenAI, for example — not using the chatbot itself. There are “versions of generative AI that run locally on your own computer or on your own server,” and those should be unproblematic to use as a journalistic tool, he says.
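As an illustration of that locally run approach (an assumed setup for the sake of example, not a tool named in any of the guidelines), the following minimal sketch uses the Hugging Face transformers library to summarize reporting notes with an open-weights model on the reporter’s own machine; the model named below is illustrative.

```python
# Minimal sketch (illustrative setup, not prescribed by any newsroom policy):
# summarize reporting notes with an open-weights model that runs locally.
# After the one-time model download, nothing typed here is sent to a
# third-party service such as OpenAI.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

notes = "Paste interview notes or document excerpts here..."
print(summarizer(notes, max_length=60, min_length=10)[0]["summary_text"])
```

The trade-off is capability: smaller local models are generally less capable than hosted chatbots, but the material never leaves the machine, which addresses the source-protection concern the policies raise.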

“There was also what I call hybridity,” Diakopoulos says. “Kind of the need to have humans and algorithms working together, hybridized into human-computer systems, in order to keep the quality of journalism high while also leveraging the capabilities of AI and automation and algorithms for making things more efficient or trying to improve the comprehensiveness of investigations.”

For local and regional newsrooms interested in developing their own guidelines, there may be little need to reinvent the wheel. The Paris Charter, developed among 16 organizations and initiated by Reporters Without Borders, is a good place to start for understanding the fundamental ethics of using AI in journalism, Diakopoulos says.

Examples of AI-related newsroom guidelines

Click the links for examples of media organizations that have created guides for journalists on using AI to produce the news. Has your newsroom posted its AI guidelines online? Let us know by sending a link to clark_merrefield@hks.harvard.edu.

Associated Press | Bayerischer Rundfunk | BBC | Council of Europe | Financial Times | The Guardian | Insider | New York Times | Radio Television Digital News Association | Wired
