OpenAI said in a blog post on Friday that, in the lead-up to Election Day, ChatGPT declined more than 250,000 requests to generate images of U.S. presidential candidates. The rejected prompts included attempts to create images of President-elect Donald Trump, Vice President Kamala Harris, President Joe Biden, Minnesota Governor Tim Walz, and Vice President-elect JD Vance.
The rapid growth of generative AI has heightened concerns about misinformation’s impact on global elections throughout 2024. Data from machine learning firm Clarity indicates a 900% year-over-year surge in deepfakes. Some of these manipulated videos have allegedly been created or funded by Russian operatives aiming to interfere in U.S. elections, according to American intelligence sources.
In an October report spanning 54 pages, OpenAI detailed how it disrupted more than 20 global operations and deceptive networks attempting to misuse its models. These ranged from AI-generated news articles to fake social media accounts designed to spread disinformation. OpenAI emphasized that none of these attempts achieved “viral engagement.”
In its Friday statement, OpenAI noted it had found no evidence that covert campaigns using its AI products were able to influence U.S. election outcomes by going viral or drawing “sustained audiences.”
With generative AI’s rise since ChatGPT’s debut in late 2022, lawmakers have been increasingly worried about AI-driven misinformation. These models, while advanced, still regularly produce errors.
“Voters categorically should not rely on AI chatbots for election information — there are far too many issues with accuracy and completeness,” warned Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, in a recent statement to CNBC.