ChatGPT Declined over 250,000 Requests to Generate Images of Presidential Candidates Ahead of Election
OpenAI said it implemented safety measures in ChatGPT to reject requests to generate images of real people.
OpenAI disclosed on Friday that ChatGPT declined more than 250,000 requests to create images of U.S. presidential candidates ahead of the Nov. 5 general election.
The AI chatbot blocked requests to produce images of former President Donald Trump, Vice President Kamala Harris, President Joe Biden, Minnesota Gov. Tim Walz, and Sen. JD Vance (R-Ohio) in the month preceding Election Day, as confirmed by OpenAI.
OpenAI emphasized the importance of these safeguards in the context of elections, describing them as part of its broader efforts to prevent its tools from being misused for deceptive or harmful purposes.
Additionally, ChatGPT directed inquiries about U.S. voting to CanIVote.org as part of its safety protocol during this election season, according to a company blog post.
OpenAI highlighted its focus on identifying and thwarting attempts to use its models to generate content for covert influence operations targeting this year’s elections around the world.
The company said it found no evidence that covert operations seeking to influence the U.S. election gained substantial traction or built lasting audiences through the use of its models.
The report said OpenAI had disrupted attempts to produce social media content related to elections in the United States, Rwanda, India, and the European Union, but that none of these networks attracted significant engagement or built lasting audiences using its tools.
“Threat actors are constantly evolving and testing our models, but there is no evidence of them making significant breakthroughs in creating new malware or building viral audiences,” the report noted.
The pact specifically focuses on AI-generated audio, video, and images designed to deceive voters and manipulate election processes. The companies committed to collaborating on enhancing their existing efforts in this area, as stated in a press release.
The participating companies agreed to various actions, including developing technology to detect and counteract deepfakes, mitigating risks, fostering cross-industry resilience, and offering transparency to the public regarding their initiatives.
Caden Pearson contributed to this report.