Since 2020, voters in California and beyond have seen disinformation proliferate and poison our politics more than ever before. Generative artificial intelligence has the power to make this trend dramatically worse.
After all, we’re one year away from another enormously consequential presidential election.
With current AI tools, it doesn’t take much effort to make a fake audio recording of Joe Biden saying anything you want. Imagine if a robocall from “Joe Biden” shared false information about polling place changes with hundreds of thousands of voters on the eve of the next election.
Or imagine a conspiracy theorist making a fake video of an elections official “caught on tape” admitting that their voting machines can be hacked, and then publishing it to a fake news site populated with otherwise regular content and engineered to look like a local newspaper. New AI tools make all of this easy.
With no current avenue to directly address the damage these new digital threats pose to our elections, Californians are demanding more from their lawmakers. Overwhelmingly so, according to a new Berkeley IGS poll.
Specifically, 84% of California voters are concerned about the dangers that disinformation, deepfakes and AI pose to our democracy, with strong majorities across all age groups, regions, racial groups, parties and genders. Nearly 3 in 4 voters believe state government has a responsibility to take action.
Additionally, the poll found that 78% of voters see social media companies as responsible for the spread of disinformation yet incapable of solving the problem, and 87% of voters expressed strong support for increased transparency and accountability around deepfakes and algorithms.
The numbers speak for themselves. Consensus is rare, but nearly all Californians agree that the potentially disastrous impact of AI and disinformation on our elections must be stopped.
This should be a wake-up call for the California Legislature.
The 2024 election is set to be the nation’s first full-fledged AI election, where AI-generated deepfakes will become a routine part of our information ecosystems. Some threats are already emerging. Without proper action, voters will not know what images, audio and video they can trust.
A challenge as enormous as this requires broad expertise and multiple strategies employed simultaneously to make a measurable impact.
With the federal government poorly positioned to take urgent, necessary action, and with Sacramento missing an unbiased, nonpartisan authority to lead such efforts, new leaders are emerging to drive change and fill the gap.
This is exactly why groups like the California Institute for Technology and Democracy, or CITED, were created. It makes perfect sense that the solutions to these existential threats are budding in America’s technology capital: California.
CITED brings together thought leaders in tech, law, public policy, civil rights, civic engagement and academia to drive pragmatic, high-impact, state-level solutions to protect our democracy in the modern age. The group’s advisors include former Democratic and Republican legislators, former civic trust and integrity executives from the nation’s largest social media platforms, and leading academics studying cybersecurity and digital threats, among others.
CITED will push a legislative strategy in 2024 and intends to take an active role during the election cycle around the use of AI and deepfakes, work just as critical as its policy pursuits.
Nearly all Californians agree: AI and disinformation are an existential threat to our democracy. Meaningful action must be taken in the next legislative session. Change cannot wait another election cycle.
Jonathan Mehta Stein is the executive director of California Common Cause.