California’s AI safety bill is controversial. Making it a law is the best way to fix it

On August 29, the California Legislature passed Senate Bill 1047 — the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act — and sent it to Governor Gavin Newsom. Newsom must decide by September 30 whether to veto the bill or sign it into law.

Recognizing the potential harms that advanced AI could cause, SB 1047 requires technology developers to build safeguards into the models the bill terms "covered models" as they develop and deploy them. California's attorney general can enforce these requirements by bringing civil actions against parties that fail to exercise "reasonable care" to ensure that 1) their models will not cause catastrophic harm, or 2) their models can be shut down in an emergency.

Several major AI companies are opposing the bill, either individually or through trade associations. Their objections include concerns that the definition of covered models is not flexible enough to take into account technological advances, that it is unfair to hold them responsible for harmful applications developed by others, and that overall the bill will stifle innovation and hinder smaller startup companies without resources dedicated to compliance.

These objections are not trivial; they deserve serious consideration, and it is quite possible that the bill will need further amendment. But the governor should sign it, because a veto would signal that no regulation of AI is acceptable now, and possibly not until or unless catastrophic harm occurs. That is not the right position for government to take on such a technology.

The bill's author, Senator Scott Wiener (D-San Francisco), negotiated with the AI industry over several versions of the bill before its final legislative passage. At least one major AI firm, Anthropic, asked for specific and significant changes to the text, many of which were incorporated into the final bill. Since the Legislature passed it, Anthropic's CEO has said that its "benefits likely outweigh its costs … (though) some aspects of the bill (still) appear worrying or unclear." Public evidence to date suggests that most other AI companies chose to oppose the bill on principle rather than engage in specific efforts to modify it.

What should we make of such opposition, especially when the leaders of some of these companies have publicly expressed concern about the potential dangers of advanced AI? In 2023, for example, the CEOs of OpenAI and Google's DeepMind signed an open letter that compared the risks of AI to those of pandemics and nuclear war.

A reasonable conclusion is that they, unlike Anthropic, oppose any kind of mandatory regulation. They want to reserve the right to decide for themselves whether the risks of an activity, a research effort, or any other deployed model outweigh its benefits. More important, they want those who develop applications based on their covered models to bear full responsibility for risk mitigation. Recent court cases have suggested that parents who put guns in their children's hands bear some legal responsibility for the consequences. Why should AI companies be treated differently?

AI companies want the public to give them a free hand despite the obvious conflict of interest – for-profit companies should not be trusted to make decisions that could hinder their profit potential.

We've been here before. In November 2023, OpenAI's board fired its CEO because it found that, under his direction, the company was heading down a dangerous technological path. Within days, various OpenAI stakeholders were able to reverse that decision, reinstating him and ousting the board members who had advocated for his removal. The irony is that OpenAI had been specifically structured to allow the board to act as it did: Regardless of the company's profit potential, the board was supposed to ensure that the public interest came first.

If SB 1047 is vetoed, anti-regulation forces will claim a victory that demonstrates the wisdom of their position, and they will have little incentive to work on alternative legislation. Having no significant regulation is to their advantage, and they will use the veto to maintain that status quo.

Alternatively, the governor could sign SB 1047 into law and issue an open invitation to its opponents to help fix its specific flaws. Faced with a law they consider imperfect, the bill's opponents would have a strong incentive to work to repair it, and to do so in good faith. But the basic approach would remain: The industry, not the government, would put forward its view of what constitutes reasonable care regarding the safety of its advanced models, and the government's role would be to make sure the industry does what the industry itself says it should do.

The consequences of vetoing SB 1047 and maintaining the status quo are substantial: Companies could continue to advance their technologies without restraint. The consequences of accepting an imperfect bill would be a meaningful step toward a better regulatory environment for all concerned. It would be the beginning, not the end, of the AI regulatory game. This first step would set the tone for what is to come and establish the legitimacy of AI regulation. The governor should sign SB 1047.

Herbert Lin is a senior research scholar at the Center for International Security and Cooperation at Stanford University and a fellow at the Hoover Institution. He is the author of "Cyber Threats and Nuclear Weapons."

