
California Gov. Newsom vetoes bill SB 1047 that aims to prevent AI disasters


California Gov. Gavin Newsom has vetoed SB 1047, a bill that aimed to prevent bad actors from using AI to cause “critical harm” to humans. The California State Assembly passed the legislation by a margin of 41-9 on August 28, but several organizations, including the Chamber of Commerce, had urged Newsom to veto it. In his veto message on September 29, Newsom said the bill is “well-intentioned” but “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it.”

SB 1047 would have held the developers of AI models responsible for adopting safety protocols to prevent catastrophic uses of their technology. Those measures included preventive steps such as testing and outside risk assessment, as well as an “emergency stop” capability that could completely shut down the AI model. A first violation would have carried a minimum penalty of $10 million, with $30 million for subsequent infractions. However, the bill was revised to eliminate the state attorney general’s ability to sue AI companies over negligent practices before a catastrophic event occurs. Under the revision, companies would be subject only to injunctive relief and could be sued for damages only if their model caused critical harm.

The law would have applied to AI models that cost at least $100 million and use 10^26 FLOPS (floating-point operations) to train. It also would have covered derivative projects in instances where a third party had invested $10 million or more in developing or modifying the original model. Any company doing business in California would have been subject to the rules if it met the other requirements. Addressing the bill’s focus on large-scale systems, Newsom said, “I do not believe this is the best approach to protecting the public from real threats posed by the technology.” The veto message adds:

By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.

An earlier version of SB 1047 would have created a new department called the Frontier Model Division to oversee and enforce the rules. Instead, the bill was altered ahead of a committee vote to place governance in the hands of a Board of Frontier Models within the Government Operations Agency. The board’s nine members would have been appointed by the state’s governor and legislature.

The bill faced a complicated path to the final vote. SB 1047 was authored by California State Sen. Scott Wiener, who told TechCrunch: “We have a history with technology of waiting for harms to happen, and then wringing our hands. Let’s not wait for something bad to happen. Let’s just get out ahead of it.” Notable AI researchers Geoffrey Hinton and Yoshua Bengio backed the legislation, as did the Center for AI Safety, which has been raising the alarm about AI’s risks over the past year.

“Let me be clear – I agree with the author – we cannot afford to wait for a major catastrophe to occur before taking action to protect the public,” Newsom said in the veto message. The statement continues:

California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.

SB 1047 drew heavy-hitting opposition from across the tech space. AI researcher Fei-Fei Li and Meta Chief AI Scientist Yann LeCun both criticized the bill for limiting the potential to explore new uses of AI. A trade group representing tech giants such as Amazon, Apple and Google said SB 1047 would limit new developments in the state’s tech sector. Venture capital firm Andreessen Horowitz and several startups also argued that the bill would place unnecessary financial burdens on AI innovators. Anthropic and other opponents of the original bill pushed for amendments that were adopted in the version of SB 1047 that passed California’s Appropriations Committee on August 15.


