California Governor Gavin Newsom vetoed Senate Bill 1047, citing concerns it would stifle innovation without addressing real AI risks.
California Governor Gavin Newsom has vetoed a hotly contested artificial intelligence (AI) bill, arguing that it would stifle innovation while failing to safeguard the public from the technology’s “real” hazards.
On September 30, Newsom vetoed Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, legislation that drew substantial opposition from Silicon Valley’s software industry.
The bill proposed mandatory safety testing of artificial intelligence models and other guardrails, which tech companies feared would restrict innovation.
In a statement on September 29, Newsom said the bill focused too heavily on regulating the largest AI companies without sufficiently safeguarding the public from the “real” risks posed by the developing technology.
“Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Senate Bill 1047, drafted by Senator Scott Wiener, a Democrat from San Francisco, would have required developers operating in California, including well-known companies such as OpenAI (the creator of ChatGPT), Meta, and Google, to build a “kill switch” into their artificial intelligence models and submit plans for mitigating extreme risks.
Had the bill taken effect, the state attorney general could have taken legal action against AI developers whose models posed an ongoing threat, such as an AI takeover of the power grid.
Newsom said he had asked the world’s leading AI safety experts to help California “develop workable guardrails” centered on a “science-based trajectory analysis”.
He added that he had ordered state agencies to expand their assessment of the risks of catastrophic events stemming from AI development.
Despite vetoing Senate Bill 1047, Newsom maintained that adequate AI safety measures are needed, stating that regulators cannot afford to “wait for a major catastrophe to occur before taking action to protect the public”.
Over the past 30 days, Newsom has signed 18 measures regulating artificial intelligence. In the period preceding his decision, the bill lacked support from legislators, influential advisors, and major tech companies.
Firms such as OpenAI, along with former House Speaker Nancy Pelosi, argued that it would greatly impede the development of artificial intelligence.
Although the measure primarily targets models of a specific cost and size (those costing more than $100 million to develop), Neil Chilson, head of artificial intelligence policy at the Abundance Institute, warned that its reach could easily expand to crack down on smaller development companies.
Some, however, are willing to back the bill.
Billionaire Elon Musk is one of the few tech CEOs who support the measure and broader AI restrictions in general. Musk is now working on his own artificial intelligence model, which he has dubbed “Grok”.
In a post on X on August 26, Musk stated that “California should probably pass the SB 1047 AI safety bill,” while acknowledging that supporting it was a tough call.