California legislators just sent Gov. Gavin Newsom more than a dozen bills regulating artificial intelligence: bills that require testing for threats to critical infrastructure, curb the use of algorithms on children, limit the use of deepfakes, and more.
But people in and around the AI industry say the proposed laws fail to stop some of the technology’s most worrisome harms, such as discrimination by businesses and government entities. At the same time, these observers say, whether the bills are signed or vetoed may depend heavily on industry pressure, in particular accusations that the state is regulating itself out of competitiveness in a hot field.
Debates over the bills, and the governor’s decisions on whether to sign them, carry particular weight because California is the epicenter of AI development, and many legislators pledged this year to regulate the technology and put the state at the global forefront of protecting people from AI.
Without question, Senate Bill 1047 got more attention than any other AI regulation bill this year. Now that it has passed both chambers of the Legislature by wide margins, industry and consumer advocates are closely watching to see whether Newsom signs it into law.
Introduced by San Francisco Democratic Sen. Scott Wiener, the bill addresses potentially catastrophic threats posed by AI, requiring developers of advanced AI models to test whether those models could enable attacks on digital and physical infrastructure or help non-experts make chemical, biological, radiological, and nuclear weapons. It also protects whistleblowers inside tech companies who report such threats.
But what if the most concerning harms from AI are commonplace rather than apocalyptic? That’s the view of people like Alex Hanna, director of research at the Distributed AI Research Institute, a nonprofit created by former Google ethical AI researchers based in California. Hanna said SB 1047 shows how California lawmakers focused too much on existential risk and not enough on preventing specific forms of discrimination. She would much rather lawmakers consider banning the use of facial recognition in criminal investigations, since that application of AI has already been shown to lead to racial discrimination, and she would also like to see contractors adopt government standards for potentially discriminatory technology.
“I think 1047 got the most noise for God knows what reason, but they’re certainly not leading the world or trying to match what Europe has in this legislation,” she said of California’s legislators.
Bill against AI discrimination is stripped
One bill that did address discriminatory AI was gutted and then shelved this year. Assembly Bill 2930 would have required AI developers to perform impact assessments and submit them to the Civil Rights Department, and it would have made the use of discriminatory AI illegal, subject to a $25,000 fine for each violation.
The original bill sought to make the use of discriminatory AI illegal in key sectors of the economy, including housing, finance, insurance, and health care. But its author, Assemblymember Rebecca Bauer-Kahan, a San Ramon Democrat, yanked it after the Senate Appropriations Committee narrowed it to assessing AI in employment, a sort of discrimination already expected to be curbed by rules that the California Civil Rights Department and California Privacy Protection Agency are drafting. Bauer-Kahan told CalMatters she plans to put forward a stronger bill next year, adding, “We have strong anti-discrimination protections, but under these systems we need more information.”
Like Wiener’s bill, Bauer-Kahan’s was subject to lobbying by opponents in the tech industry, including Google, Meta, Microsoft, and OpenAI, which hired its first Sacramento lobbyist this spring. Unlike Wiener’s bill, it also drew opposition from nearly 100 companies across a wide range of industries, including Blue Shield of California, dating app company Bumble, biotech company Genentech, and pharmaceutical company Pfizer.
The failure of the AI discrimination bill is one reason there are still “gaping holes” in California’s AI regulation, according to Samantha Gordon, chief program officer at TechEquity, which lobbied in favor of the bill. Gordon, who co-organized a working group on AI with privacy, labor, and human rights groups, believes the state still needs legislation to address “discrimination, disclosure, transparency, and which use cases deserve a ban because they have demonstrated an ability to harm people.”