California lawmakers have sent a bill to Gov. Gavin Newsom’s desk that would put new safety rules in place for companies developing artificial intelligence.

SB 1047 requires California companies that spend at least $100 million developing AI models to conduct safety testing to prevent major risks or harms. Experts have warned that, without guardrails, the models could eventually help bad actors create a biological weapon or carry out cyberattacks that shut down the electric grid or melt down the banking system.

(Earlier coverage in video above.)

The Assembly approved the bill on a 48-16 vote on Wednesday. The bill cleared the state Senate Thursday on a 29-2 vote.

“While the exact timing of these threats is uncertain, some of these threats could materialize in as little as a year,” Dan Hendrycks, an AI researcher, told reporters in a virtual news conference on Monday. “Product safety testing is a standard for many industries including manufacturers of cars, airplanes, prescription drugs and nuclear power plants.”

The bill has frustrated some in the industry who worry the regulation could slow the fast-growing field's progress. That includes OpenAI, the developer of ChatGPT. The company has warned that if the bill passes, it may be forced to move operations out of California.

“I understand this is hardball politics, I’m used to that,” said State Senator Scott Wiener, D-San Francisco, who wrote the proposal. “Anytime we try to pass laws in the public interest, industry will threaten to move.”

The issue has divided Democrats. A group of California members in the U.S. House of Representatives, including former Speaker Nancy Pelosi, sent a letter to Gov. Gavin Newsom earlier this month, urging him to reject the bill if it lands on his desk.

“In short, we are very concerned about the effect this legislation could have on the innovation economy of California without any clear benefit for the public,” the group wrote. “High tech innovation is the economic engine that drives California’s prosperity.”

“Congress has been paralyzed when it comes to technology policy,” Wiener told reporters in response, noting Congress has not passed major tech regulations since the 1990s aside from the TikTok ban. “I don’t say this to bash Congress, but Congress has proven it’s not capable of passing strong technology policy.”

Republican state lawmakers were also divided over the measure.

Assemblyman Devon Mathis, R-Visalia, told KCRA 3 he plans on voting for the bill. “How do you create public trust when the guys who are controlling it are stonewalling regulation?” he said.

But others have said they have issues with the bill.

“There are some things government has a role to play in regulating and managing,” said Assemblyman Josh Hoover, R-Folsom. “But my concerns with this piece of legislation is that it just goes too far in that direction before we know what we’re dealing with.”

The issue has also divided the tech industry overall.

Meta’s chief AI scientist, Yann LeCun, said in part in a post on X, “regulating [research and development] would have apocalyptic consequences on the AI ecosystem.”

Elon Musk on Monday night threw his support behind the bill.

“This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he posted on X. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”

Gov. Gavin Newsom has not publicly stated his position on the bill.

“We dominate in this space, and I want to continue to dominate in this space, I don’t want to cede in this space to other states or other countries,” he said at an AI Summit he convened in May. “If we over-regulate, if we over-indulge, if we chase a shiny object, we could put ourselves in a perilous position. But at the same time, we have an obligation to lead.”

The governor has until Sept. 30 to sign or veto the bill.
