US News

ChatGPT Declined over 250,000 Requests to Generate Images of Presidential Candidates Ahead of Election


OpenAI made it clear that safety measures were implemented in ChatGPT to reject requests for generating images of real individuals.

As of Friday, OpenAI disclosed that ChatGPT turned down over 250,000 requests to create images of U.S. presidential candidates before the general election on Nov. 5.

The AI chatbot blocked requests to produce images of former President Donald Trump, Vice President Kamala Harris, President Joe Biden, Minnesota Gov. Tim Walz, and Sen. JD Vance (R-Ohio) in the month preceding Election Day, as confirmed by OpenAI.

In a blog post on Nov. 8, the company stated, “We’ve applied safety measures to ChatGPT to refuse requests to generate images of real people, including politicians.”

OpenAI emphasized the importance of these safeguards in the context of elections, describing them as part of its broader initiatives to prevent the misuse of its tools for deceptive or harmful purposes.

Additionally, ChatGPT directed inquiries about U.S. voting to CanIVote.org as part of its safety protocol during this election season, according to the blog post.

OpenAI highlighted its focus on identifying and thwarting attempts to use its models to generate content for covert influence operations targeting this year’s global elections.

The company said it found no evidence of covert operations attempting to influence the U.S. election gaining substantial traction or building lasting audiences through the use of its models.

As detailed in its October report, OpenAI said it had intervened in more than 20 operations and deceptive networks globally that sought to exploit its models for activities such as malware creation, article writing, and content generation through fake personas on social media.

The report mentioned that OpenAI disrupted attempts to produce social media content related to elections in the United States, Rwanda, India, and the European Union, but there was no indication that these networks managed to attract significant engagement or build lasting audiences using their tools.

“Threat actors are constantly evolving and testing our models, but there is no evidence of them making significant breakthroughs in creating new malware or building viral audiences,” the report noted.

Earlier this year, a coalition of 20 major tech companies—such as OpenAI, Google, and Meta—signed an agreement affirming their dedication to preventing deceitful use of AI in global elections this year.

The pact specifically focuses on AI-generated audio, video, and images designed to deceive voters and manipulate election processes. The companies committed to collaborating on enhancing their existing efforts in this area, as stated in a press release.

The participating companies agreed to various actions, including developing technology to detect and counteract deepfakes, mitigating risks, fostering cross-industry resilience, and offering transparency to the public regarding their initiatives.

Caden Pearson contributed to this report.


