Starting in November, Google will require political advertisements on its platforms to disclose any digitally altered or AI-generated content in images and audio. The policy change comes ahead of a contentious US presidential election and aims to address concerns about the use of generative AI to mislead voters. “For years we’ve provided additional levels of transparency for election ads,” said a Google spokesperson. “Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated.”
In June, an AFP Fact Check team determined that a Ron DeSantis campaign video shared on X, formerly known as Twitter, featured AI-generated images. The video depicted former US President Donald Trump kissing Anthony Fauci on the cheek. Google’s existing ad policies already prohibit manipulating digital media to deceive or mislead on matters of politics or public concern. False claims that could undermine trust in the election process are likewise forbidden. Google also requires political ads to disclose their funding sources and makes information about them available in an online ad library.
The upcoming update will require election-related ads to prominently disclose any “synthetic content” depicting real or realistic-looking people or events. Google says it continues to invest in technology to detect and remove such content. The disclosure of digitally altered content in election ads must be clear and conspicuous, placed where users are likely to notice it. Examples of acceptable labels include “This image does not depict real events” and “This video content was synthetically generated.”