What Do AI Charters Say About News Media Today?

17 April 2024

In a recent New York Times interview, Jim VandeHei, Axios CEO, warned that AI will “eviscerate the weak, the ordinary, the unprepared in media.” That is why, since Gen AI’s recent surge into the workplace, a growing number of publishers and news organizations have decided to formalize its use in AI charters. 

We reviewed several news publishers’ AI charters (also known as guidelines, principles, or frameworks) and distilled three insights from them about where publishers stand today in their relationship with AI.

We reviewed the following AI charters for this article: De Tijd, L’Echo, Roularta, The Telegraph, The Guardian, Le Monde, Mediahuis, and Reporters Without Borders (along with a coalition of 16 other publishing groups).

Trust becomes an even more important unique selling point for publishers.

Gert Ysebaert, Mediahuis CEO, stated at the 2024 Mather Symposium that “Trust is the difference between professional journalism and social media.” 

Trust has always been the bedrock of journalism and a key differentiator from other information sources. Therefore, it comes as no surprise that every AI charter reviewed for this article emphasizes the importance of maintaining brand trust and reliability.  

De Tijd and L’Echo’s charter perhaps puts it best:  

“Our journalism remains at its core journalism performed by human journalists: gathering news, checking information, and writing is a human process. The use of AI technology is always to support the human process.”  

In short: AI is supportive, not substitutive. Indeed, building rapport with readers and sources is “not something you can do from typing a prompt into ChatGPT”.  

What do people think of AI journalism?

A recent YouGov poll confirmed that humans instill more trust than AI in the media sphere, stating that “around half of Britons say they would trust a news article written by a human journalist and overseen by a human editor. Replacing [either a journalist or an editor] with an AI reduced trust to about a quarter; replacing both with AI reduced it to just one in eight.”

What are the tangible impacts of this reaffirmation of journalism’s main selling point?  

For Axios, it means investing in more events, an area where AI will have little impact, and placing the spotlight on its journalists to create a personal relationship with readers, a strategy that has worked to build trust in the creator/influencer economy.

The AI charters also stress the necessity of transparency as a key pillar of trust. In practice, this means informing users upfront when AI is used. Its use also comes with a hierarchy of responsibilities, with editors-in-chief ultimately responsible for AI’s use in the newsroom. For instance, the Mediahuis charter states that:

“The editor-in-chief oversees the application and implementation of AI technologies in the newsroom in order to ensure it adheres to existing journalism codes and legal/ethical standards.”

Now is the time for media organizations to lobby and shape AI regulation.

With AI still in its infancy, a critical window has opened for shaping the rules that will govern its future use.

The Paris Charter, an AI framework adopted by Reporters Without Borders alongside 16 media and journalist groups, highlighted why media organizations should play a role in AI’s governance: “As essential guardians of the right to information, journalists, media outlets and journalism support groups should play an active role in the governance of AI systems.”  

That said, perspectives on the effectiveness and purpose of such lobbying efforts vary. For instance, Axios has expressed skepticism about the benefits of lobbying AI companies directly for compensation, suggesting that this approach mirrors past efforts with large social media companies, which they view as largely fruitless. Rather, they argue that publishers should focus on investing in better product and content offerings.  

Conversely, organizations like the Associated Press appear more eager to engage in these early stages of AI regulation, having already reached a licensing agreement with OpenAI. They are actively seeking to shape the relationship between the media and AI companies, perhaps hoping to set precedents that could benefit the broader media industry and ensure fair practices in the use of AI in journalism and beyond.

AI’s role is mainly in the background, but to different degrees, say publishers.

It is widely accepted that AI should play a supporting role rather than taking center stage. The consensus across the reviewed AI charters is that AI can be used for tasks of moderate to low risk. These tasks generally cover what happens after the content of an article has been developed.

This can include using AI for article or video transcription, translating content into other languages, or condensing articles into digestible summaries. Le Monde in English, for instance, first translates content from its main French website using AI, and then has a journalist and a translator review the result.

There’s also a growing openness to utilizing AI in the preliminary stages of journalism, such as during the ideation process (as mentioned in The Telegraph and Guardian guidelines) or sifting through substantial datasets (as mentioned in De Tijd and L’Echo’s).  

However, not all news organizations are on board with integrating AI into their operations. The Telegraph, for example, has opted out of using AI tools for any writing assistance, reflecting a cautious approach amid the industry’s ongoing debate about the ethical implications and the impact on journalistic quality. The use of AI-generated images is not uniform either: Le Monde does not permit them, while Roularta permits them as long as their use is clearly stated: “Images generated by AI are used to illustrate some of our articles. This concerns illustrations supporting the text and not fictional photographs. The origin of each AI-generated image will be explicitly stated.”

When all’s said and done

Felix M. Simon, an Oxford doctoral candidate specializing in AI and journalism, said in an interview with POLITICO that “[the assumption that] AI will improve journalism is not a foregone conclusion.” It requires managers and editors to actively steer their newsrooms and media organizations in their use of this technology. Using AI to churn out large amounts of low-quality content is a real possibility (think of CNET’s series of erroneous AI articles), which is why such AI charters help set the direction for its use.

As the technology evolves, these AI charters will too. But as they stand today, we can observe that:  

  • Trust is being showcased more prominently as journalism’s unique selling point, and publishers are tailoring their offerings to promote it.  
  • Some (but not all) publishers are interested in lobbying to shape overarching AI regulation during this critical window.  
  • AI’s role is mainly in the back office, but even so, publishers have not embraced it to the same extent.  
