Will a bill regulating AI in California protect consumers or destroy the technology?

California is a world leader in artificial intelligence — which means we’re expected to help figure out how to regulate it. The state is considering several bills to that end, but none is garnering more attention than Senate Bill 1047. Introduced by Senator Scott Wiener (D-San Francisco), it would require companies building the biggest AI models to test and modify those models to avoid serious harm. Is this a necessary step to keep AI responsible, or an overreaction? Simon Last, co-founder of an AI-powered company, and Paul Lekas, public policy chief at the Software & Information Industry Association, give their perspectives below.

This bill would help protect consumers without hurting innovation

by Simon Last

As the co-founder of an AI-powered company, I have witnessed the amazing progress of artificial intelligence. Every day, I design products that use AI, and it is clear that these systems will become much more powerful over the next few years. We will see major advances in creativity and productivity, along with advances in science and medicine.

However, as AI systems become more sophisticated, we must also be mindful of their risks. Without proper precautions, AI can cause serious harm on an unprecedented scale – cyberattacks on critical infrastructure, the development of chemical, nuclear or biological weapons, automated crime and much more.

California’s SB 1047 strikes a balance between protecting public safety from such harms and supporting innovation, focusing on common-sense safety requirements for the handful of companies developing the most powerful AI systems. It includes whistleblower protections for employees who report safety concerns at AI companies, and importantly, the bill is designed to support California’s incredible startup ecosystem.

SB 1047 would only affect companies that are building next-generation AI systems that cost more than $100 million to train. Based on industry best practices, the bill mandates safety testing and mitigation of predicted risks before these systems are released, as well as the ability to shut them down in an emergency. In cases where AI causes mass casualties or damages of at least $500 million, state attorneys general could sue companies to hold them liable.
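To make those thresholds concrete, here is a minimal sketch in Python of how the bill’s two dollar figures gate its obligations, as described above. This is an illustration only, not language from the bill; the function and variable names (is_covered_model, attorney_general_may_sue, training_cost_usd) are invented for this example.

    # Hypothetical illustration of SB 1047's two numeric thresholds.
    # Not from the bill's text; names are invented for this sketch.

    COVERED_TRAINING_COST_USD = 100_000_000  # training-cost threshold
    LIABILITY_DAMAGES_USD = 500_000_000      # damages threshold for AG suits

    def is_covered_model(training_cost_usd: float) -> bool:
        # Safety testing, mitigation, and shutdown duties apply only to
        # models costing more than $100 million to train.
        return training_cost_usd > COVERED_TRAINING_COST_USD

    def attorney_general_may_sue(damages_usd: float, mass_casualties: bool) -> bool:
        # Liability suits apply when AI causes mass casualties or at
        # least $500 million in damages.
        return mass_casualties or damages_usd >= LIABILITY_DAMAGES_USD

Under this reading, a model trained for $50 million would carry none of the bill’s duties, while a $200 million model would need pre-release testing and an emergency shutdown capability.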

These safety standards would apply to the AI “foundation models” on which startups build specialized products. Through this approach, we can mitigate risks across the industry more effectively without burdening small-scale developers. As a startup founder, I am confident that this bill will not impede our ability to build and grow.

Some critics argue that regulation should focus only on harmful uses of AI rather than the underlying technology. But this approach is misguided because it is already illegal to conduct cyberattacks or use biological weapons, for example. SB 1047 provides what is missing: a way to prevent harm before it happens. Product safety testing is standard for many industries, including manufacturers of cars, airplanes, and prescription drugs. Manufacturers of the largest AI systems should also be held to a similar standard.

Others claim that this legislation would drive businesses out of the state. This is absurd. California’s supply of talent and capital is all but impossible to replicate elsewhere, and SB 1047 will not change the factors that attract companies to operate here. Also, the bill applies to foundation model developers doing business in California, regardless of where they are headquartered.

Technology leaders, including Meta’s Mark Zuckerberg and OpenAI’s Sam Altman, have gone before Congress to discuss AI regulation, warning of the technology’s potentially devastating effects and even calling for regulation themselves. But expectations for action from Congress are low.

With 32 of Forbes’ top 50 AI companies based in California, our state has a special responsibility to help the industry thrive. SB 1047 provides a framework for young companies to grow alongside larger ones while prioritizing public safety. By making smart policy choices now, state lawmakers and Governor Gavin Newsom can solidify California’s position as a global leader in responsible AI advancement.

Simon Last is the co-founder of San Francisco-based Notion.

These nearly impossible standards would cost California its edge in AI

by Paul Lekas

California is the cradle of American innovation. Over the years, many information and technology businesses, including my association’s members, have delivered for Californians by creating new products for consumers, improving public services, and powering the economy. Unfortunately, legislation making its way through the California Legislature threatens to undermine the state’s most talented innovators by targeting frontier – or highly advanced – AI models.

The bill goes far beyond its stated focus of addressing real concerns about the safety of these models while ensuring that California reaps the benefits of this technology. Instead of targeting foreseeable harms, such as the use of AI for predictive policing based on biased historical data, or holding accountable those who use AI for nefarious purposes, SB 1047 would ultimately prevent developers from releasing AI models that can be customized to meet the needs of California consumers and businesses.

SB 1047 would do so by forcing the leaders of new AI technologies to do everything possible to anticipate and mitigate the ways in which their models might be misused and to prevent that misuse. This is simply not possible, especially when there are no universally accepted technical standards for measuring and mitigating frontier model risk.

If SB 1047 becomes law, California consumers will lose access to AI tools that are useful to them. It’s like stopping production of a prescription drug because someone took it illegally or overdosed. They will also lose access to AI tools designed to protect Californians from malicious activity enabled by other AI.

To be clear, the concerns about SB 1047 do not reflect a belief that AI should be promoted without meaningful oversight. There is bipartisan consensus that we need safeguards around AI to minimize the risk of misuse and address potential harm to public health and safety, civil rights, and other areas. States have taken the lead in enacting legislation to discourage the use of AI for wrongdoing. For example, Indiana, Minnesota, Texas, Washington, and California have enacted laws to prohibit the creation of deepfakes depicting intimate images of identifiable individuals and to restrict the use of AI in election advertising.

Congress is also considering safeguards to protect elections, privacy, national security and other concerns while maintaining America’s technological advantage. Indeed, oversight would best be handled in a coordinated manner at the federal level – as is being done through the AI Safety Institute launched at the National Institute of Standards and Technology – rather than through the threat of civil and criminal liability. This approach recognizes that frontier model safety requires massive resources that no single state, not even California, can muster.

So, while it is important for elected leaders to take steps to protect consumers, SB 1047 goes too far. It would force emerging and established companies alike to weigh nearly impossible compliance standards against the value of doing business elsewhere. California could lose its edge in AI innovation. And AI developers outside the US that are not subject to the same transparency and accountability principles would gain a stronger position, inevitably putting the privacy and security of US consumers at risk.

Paul Lekas is the head of global public policy and government affairs for the Software & Information Industry Association in Washington.

