New survey finds half of newsrooms use Generative AI tools; only 20% have guidelines in place.
A new WAN-IFRA survey, conducted in collaboration with SCHICKLER Consulting, offers a barometer of where news publishers currently stand on using Generative AI.
Ever since OpenAI’s ChatGPT exploded onto the scene at the end of 2022, there has been no shortage of voices calling artificial intelligence, and Generative AI specifically, a game-changing technology.
The news media industry in particular is wrestling with some deep and complex questions about what Generative AI (GenAI) could mean for journalism. Although it seems less and less likely that AI will threaten journalists’ jobs, as some may have feared, news executives are asking questions about, for example, information accuracy, plagiarism and data privacy.
To get an overview of where things stand in the industry, in late April and early May we surveyed the global community of journalists, editorial managers and other news professionals about their newsrooms’ use of Generative AI tools.
A total of 101 participants from all over the world took the survey; here are a few key takeaways from their responses.
WAN-IFRA Members can access the full survey results here.
Half of newsrooms already work with GenAI tools
Given that most Generative AI tools became available to the public only a few months ago – at most – it is quite remarkable that almost half (49 percent) of our survey respondents said their newsrooms are using tools like ChatGPT. On the other hand, as the technology is still evolving quickly and in possibly unpredictable ways, it is understandable that many newsrooms remain cautious about it. This might be the case for the respondents whose companies haven’t adopted these tools (yet).
Overall, attitudes toward Generative AI in the industry are overwhelmingly positive: 70 percent of survey participants said they expect Generative AI tools to be helpful for their journalists and newsrooms. Only 2 percent said they see no value in the short term, while another 10 percent are not sure, and 18 percent think the technology needs more development to be really helpful.
Content summaries are the most common use case
Although there have been some slightly alarmist reactions to ChatGPT, asking whether the technology could end up replacing journalists, the number of newsrooms actually using GenAI tools to create articles is relatively low. Instead, the primary use case is the tools’ capability to digest and condense information, for example into summaries and bullet points, our respondents said. Other key tasks journalists are using the technology for include simplified research / search, text correction and improving workflows.
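For illustration only, here is a minimal sketch of what that bullet-point summary use case can look like in practice, assuming the OpenAI Python SDK; the model name, prompt wording and function name are our own assumptions, not anything the survey or its respondents prescribed.

# Illustrative sketch: condensing an article into bullet points with a GenAI API.
# The model and prompt below are assumptions for demonstration purposes only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarise_to_bullets(article_text: str) -> str:
    """Ask the model for a short bullet-point summary of an article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "Summarise the article in 3-5 concise bullet points."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

# Example usage: print(summarise_to_bullets(open("draft.txt").read()))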
Going forward, however, the common use cases are likely to evolve as more and more newsrooms look for ways to make broader use of the technology and integrate it further into their operations. Our respondents highlighted personalisation, translation and a higher level of workflow / efficiency improvements as specific examples of areas where they expect GenAI to be more helpful in the future.
Few newsrooms have guidelines for their use of GenAI
Practices vary widely when it comes to how the use of GenAI tools is governed in newsrooms. For now, the majority of publishers take a relaxed approach: almost half of survey participants (49 percent) said that their journalists have the freedom to use the technology as they see fit. An additional 29 percent said that they are not using GenAI.
Only a fifth of respondents (20 percent) said that they have guidelines from management on when and how to use GenAI tools, while 3 percent said that the use of the technology is not allowed at their publications. As newsrooms grapple with the many complex questions related to GenAI, it seems safe to assume that more and more publishers will establish specific AI policies on how to use the technology (or perhaps forbid its use entirely).
Inaccuracies and plagiarism are newsrooms’ top concerns
Given that there have already been cases where a news outlet published content created with the help of AI tools that was later found to be false or inaccurate, it is perhaps not surprising that inaccuracy of information / quality of content is the number one concern among publishers when it comes to AI-generated content: 85 percent of survey respondents highlighted this as a specific issue they have relating to GenAI.
Next on publishers’ minds are issues related to plagiarism / copyright infringement, followed by data protection and privacy. It seems likely that the lack of clear guidelines (see the previous point) only amplifies these uncertainties, and that developing AI policies, along with staff training and open communication about the responsible use of GenAI tools, should help alleviate them.