OpenAI’s ChatGPT turned down more than 250,000 requests to generate images of 2024 U.S. presidential candidates in the lead-up to Election Day, the company reported in a blog post on Friday. The rejected requests involved prominent political figures including President-elect Donald Trump, Vice President Kamala Harris, President Joe Biden, Minnesota Governor Tim Walz, and Vice President-elect JD Vance, OpenAI confirmed.
As the capabilities of generative artificial intelligence (AI) expand, concerns are mounting about the potential impact of AI-driven misinformation on elections worldwide. Clarity, a machine learning analytics firm, reported a 900% year-over-year increase in deepfake content, some of which, according to U.S. intelligence, includes videos produced or funded by Russian actors seeking to interfere in the U.S. election.
OpenAI’s October report, spanning 54 pages, revealed that it had intercepted over 20 deceptive operations attempting to use its AI models for misinformation. These threats involved AI-generated articles, social media posts from fake accounts, and other tactics. However, OpenAI noted that none of these election-related efforts achieved widespread engagement.
In its Friday blog post, OpenAI reassured the public, stating there is no evidence that covert efforts leveraging its tools successfully reached viral status or developed large audiences. Nevertheless, the role of generative AI in spreading election misinformation remains a pressing concern for lawmakers. With the rapid rise of tools like ChatGPT since late 2022, questions about the reliability and accuracy of AI-generated content continue to grow.
“Voters categorically should not look to AI chatbots for information about voting or the election — there are far too many concerns about accuracy and completeness,” said Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, in a statement to CNBC.