AI Chatbots Aren't Reliable on Voting Issues: Government Officials

New York Attorney General Letitia James speaks during a press conference at the Attorney General's Office in New York on February 16, 2024.

Timothy A. Clary | AFP | Getty Images

With four days until the presidential election, U.S. government officials are warning against relying on artificial intelligence chatbots for election-related information.

In a consumer alert Friday, New York Attorney General Letitia James' office said it “tested several AI-powered chatbots by asking sample questions about voting and found that they often provided inaccurate information in response.”

Election Day is Tuesday in the United States, and Republican candidate Donald Trump and Democratic Vice President Kamala Harris are in a virtual dead heat.

“New Yorkers who rely on chatbots rather than official government sources to answer their voting questions risk being misinformed and could even lose their chance to vote because of the inaccurate information,” James' office said.

It's an important year for political campaigns worldwide. Elections are taking place that affect more than 4 billion people in more than 40 countries. The rise of AI-generated content has raised serious concerns about election-related misinformation.

According to data from Clarity, a machine learning company, the number of deepfakes has increased by 900% year-over-year. Some included videos created or paid for by Russians to disrupt the U.S. election, U.S. intelligence officials say.

Lawmakers are particularly concerned about misinformation in the age of generative AI, which emerged in late 2022 with the launch of OpenAI's ChatGPT. Large language models are still new and routinely output inaccurate and unreliable information.

“Voters generally should not turn to AI chatbots for information about voting or the election – there are far too many concerns about accuracy and completeness,” Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, told CNBC. “Study after study has shown examples of AI chatbots hallucinating information about polling places, voting accessibility, and acceptable ways to vote.”

In a July study, the Center for Democracy & Technology posed 77 different election-related queries to AI chatbots and found that more than a third of the responses contained false information. The study tested chatbots from Mistral, Google, OpenAI, Anthropic and Meta.

“We agree with the New York Attorney General that voters should consult official channels to understand where, when and how to vote,” an Anthropic spokesperson told CNBC. “For specific election and voting information, we direct users to reliable sources, as Claude is not trained frequently enough to provide real-time information about specific elections.”

OpenAI said in a recent blog post: “Starting November 5, people who ask ChatGPT for election results will see a message encouraging them to look up news sources such as the Associated Press and Reuters, or their state or local election board, for the most comprehensive and most current information.”

In a 54-page report released last month, OpenAI said it disrupted “more than 20 operations and fraudulent networks from around the world that attempted to exploit our models.” The threats ranged from AI-generated website articles to social media posts from fake accounts, the company wrote, although none of the election-related operations were able to generate “viral engagement.”

As of November 1, the Voting Rights Lab has tracked 129 bills in 43 state legislatures that contain provisions intended to regulate AI's potential to produce election disinformation.

