Study finds most media outlets lack policies on AI-generated images
A study led by RMIT University argues that media outlets must establish clear policies and procedures for the use of AI-generated imagery.
The research, conducted in collaboration with Washington State University and the QUT Digital Media Research Centre, involved interviews with 20 photo editors from 16 public and commercial media organizations across Europe, Australia, and the United States. The study revealed that only slightly over one-third of these organizations have policies in place for the generation, use, and labeling of images produced by artificial intelligence.
Among the findings: five organizations prohibited staff from using AI to create images, three prohibited only photorealistic images, and the rest permitted AI-generated images solely for stories about AI.
Lead researcher and RMIT senior lecturer TJ Thomson emphasized the importance of transparency when utilizing generative AI technologies in media. He noted that while photo editors are keen on disclosing the use of AI, media outlets cannot always regulate how images are perceived by audiences or displayed on other platforms.
Dangers Present When AI-Generated Images Resemble Reality
Participants were generally willing to use AI to generate non-photorealistic illustrations or to fill gaps in stock image libraries. Concerns arose, however, when the source of an image was not clearly communicated to viewers.
The study highlighted instances where AI-generated images, such as those depicting the Pope wearing Balenciaga, went viral without proper context, leading many to mistake them for real photographs due to their near-photorealistic quality.
A key recommendation from the study was that newsrooms adopt detailed policies and processes governing the appropriate use of generative AI, to prevent incidents of misinformation and disinformation.
While urging transparency in media organizations' policies on AI, the study did not advocate a complete ban on AI in newsrooms. Instead, it emphasized proactive measures to regulate the technology and ensure a safer online environment.