During a Congressional Internet Caucus Academy briefing this week, experts argued the impact of artificial intelligence on the 2024 election was less extreme than predicted — but deepfakes and misinformation still played a role.

There were major concerns leading up to the 2024 election that AI would disrupt elections through false information; overall, the impact was less extreme than experts had warned it could be. However, AI still had an effect, seen in deepfakes such as the Biden robocall and in misinformation from AI-powered chatbots.

“We did not see widespread use of AI tools to create deepfakes that would somehow sway the election,” said Jennifer Huddleston, senior fellow in technology policy at the Cato Institute.

And while the widespread AI-driven “apocalypse” some experts predicted did not materialize, a significant amount of misinformation still circulated. The Biden robocall was the most notable deepfake of this election cycle. But as Tim Harper, senior policy analyst and project lead for the Center for Democracy and Technology, explained, there were several instances of AI misuse, including fake websites generated by foreign governments and deepfakes spreading misinformation about candidates.

Beyond that kind of misinformation, Harper emphasized that a major concern was how AI tools could be used to target voters at a more granular level than previously seen, which he said did occur during this election cycle. Examples include AI-generated text messages to Wisconsin students that were deemed intimidating, and non-English misinformation campaigns intended to confuse Spanish-speaking voters. AI’s role in this election, Harper said, has affected public trust and the perception of truth.

One positive trend this year, according to Huddleston, was that the existing information ecosystem helped combat AI-powered misinformation. The Biden robocall, for example, drew a quick response, allowing voters to be more informed and discerning about what to believe.

Huddleston said she believes it is too soon to predict precisely how this technology will evolve or how public perception and adoption of AI will develop. But she said education can serve as a policy tool to improve understanding of AI risks and reduce misinformation.

Internet literacy is still developing, Harper said; he expects to see a similarly slow increase in AI literacy and adoption: “I think public education around these sorts of threats is really important.”

AI REGULATION AND ELECTIONS

While bipartisan legislation was introduced to fight AI-generated deepfakes, it was not passed prior to the election. However, other regulatory protections do exist.

Harper pointed to the Federal Communications Commission (FCC) ruling that the Telephone Consumer Protection Act (TCPA) does regulate robocalls using artificially generated speech. That ruling applied to the Biden robocall, whose perpetrators were held accountable.

Regulatory gaps remain even in this case, however. The TCPA does not apply to nonprofit organizations, religious institutions, or calls to landlines. Harper said the FCC is transparent about, and pushing to close, such “loopholes.”

Regarding legislation to combat AI risks, Huddleston said that in many cases some protections already exist, and she argued the issue is often not the AI technology itself but its improper use. Those regulating the technology, she said, should be careful not to wrongly condemn tools that can be beneficial, and should consider whether problems are genuinely new or are existing problems to which AI adds a layer.

Many states have implemented their own AI legislation, and Huddleston cautioned that this “patchwork” of laws could create barriers to developing and deploying AI technologies.

Harper noted there are valid First Amendment concerns about overregulating AI. He argued that more regulation is needed, but said it remains to be seen whether it will come through agency-level rulemaking or new legislation.

In the absence of comprehensive federal legislation addressing AI use in elections, many private-sector tech companies have attempted to self-regulate. According to Huddleston, this is not due solely to government pressure, but also to consumer demand.

Huddleston noted that broad definitions of AI in the regulatory world could also inadvertently restrict beneficial applications of AI.

Many of these applications are innocuous, she explained, such as speech-to-text software and navigation platforms that find the best route between campaign events. Uses like AI-generated captioning can also build capacity for campaigns with limited resources.

AI can also help identify potential instances of a campaign being hacked, Huddleston said, allowing campaigns to respond more proactively to security threats.

“It’s not just the campaigns who can benefit from certain uses of this technology,” Harper said, underlining that election officials can use it to educate voters, inform planning, conduct post-election analysis, and increase efficiency.

While this briefing addressed the impact of AI on the election, questions remain about the impact of the election on AI. The incoming administration’s platform included revoking the Biden administration’s executive order on AI, Huddleston said, adding that whether it will be revoked and replaced or revoked without a replacement remains to be seen.
