| Safe and Secure Innovation for Frontier Artificial Intelligence Models Act | |
|---|---|
| Legislature | California State Legislature |
| Full name | Safe and Secure Innovation for Frontier Artificial Intelligence Models Act |
| Introduced | February 7, 2024 |
| Assembly voted | August 28, 2024 (48–16) |
| Senate voted | August 29, 2024 (30–9) |
| Sponsor(s) | Scott Wiener |
| Governor | Gavin Newsom |
| Bill | SB 1047 |
| Website | Bill Text |
| Status | Not passed (vetoed by Governor on September 29, 2024) |
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist". [1] Specifically, the bill would apply to models that cost more than $100 million to train and were trained using more than 10²⁶ integer or floating-point operations. [2] SB 1047 would apply to all AI companies doing business in California, regardless of where the company is located. [3] The bill creates protections for whistleblowers [4] and requires developers to perform risk assessments of their models prior to release, under the supervision of the Government Operations Agency. It would also establish CalCompute, a University of California public cloud computing cluster for startups, researchers, and community groups.
The rapid increase in capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022, caused some researchers and members of the public to express concern about existential risks associated with increasingly powerful AI systems. [5] [6] The plausibility of this threat is widely debated. [7] AI regulation is also sometimes advocated to prevent bias and privacy violations. [6] However, it has been criticized as possibly leading to regulatory capture by large AI companies like OpenAI, in which regulation advances the interest of larger companies at the expense of smaller competitors and the public in general. [6]
In May 2023, hundreds of tech executives and AI researchers [8] signed a statement on AI risk of extinction, which read "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It received signatures from the two most-cited AI researchers, [9] [10] [11] Geoffrey Hinton and Yoshua Bengio, along with industry figures such as OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei. [12] [13] Many other experts thought that existential concerns were overblown and unrealistic, as well as a distraction from the near-term harms of AI, such as discriminatory automated decision-making. [14] Notably, Sam Altman himself called for AI regulation at a Congressional hearing the same month. [6] [15] Several technology companies have made voluntary commitments to conduct safety testing, for example at the AI Safety Summit and AI Seoul Summit. [16] [17]
Governor Newsom of California and President Biden issued executive orders on artificial intelligence in 2023. [18] [19] [20] State Senator Wiener said SB 1047 draws heavily on the Biden executive order, and is motivated by the absence of unified federal legislation on AI safety. [21] California has previously legislated on tech issues, including consumer privacy and net neutrality, in the absence of action by Congress. [22] [23]
The bill was originally drafted by Dan Hendrycks, co-founder of the Center for AI Safety, who has previously argued that evolutionary pressures on AI could lead to "a pathway towards being supplanted as the Earth's dominant species." [24] [25] The center issued a statement in May 2023 co-signed by Elon Musk and hundreds of other business leaders stating that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." [26]
State Senator Wiener first proposed AI legislation for California through an intent bill called SB 294, the Safety in Artificial Intelligence Act, in September 2023. [27] [28] [29] SB 1047 was introduced by Wiener on February 7, 2024. [30] [31]
On May 21, SB 1047 passed the Senate 32–1. [32] [33] The bill was significantly amended by Wiener on August 15, 2024, in response to industry feedback. [34] The amendments added clarifications and removed both the proposed "Frontier Model Division" and the penalty of perjury. [35] [36]
On August 28, the bill passed the State Assembly 48–16. Because of the amendments, it then returned to the Senate, which passed it 30–9. [37] [38]
On September 29, Governor Gavin Newsom vetoed the bill. It is considered unlikely that the legislature will override the governor's veto with a two-thirds vote from both houses. [39]
Before training a covered model, developers of covered models and derivatives are required to submit a certification, subject to auditing, that "reasonable" risk of "critical harms" from the model and its derivatives, including post-training modifications, has been mitigated. Safeguards to reduce risk include the ability to shut down the model, [4] which has been variously described as a "kill switch" [40] and a "circuit breaker". [41] Whistleblower provisions protect employees who report safety problems and incidents. [4]
SB 1047 would also create a public cloud computing cluster called CalCompute, associated with the University of California, to support startups, researchers, and community groups that lack large-scale computing resources. [35]
SB 1047 covers AI models trained using more than 10²⁶ integer or floating-point operations at a cost of over $100 million. [2] [42] If a covered model is fine-tuned at a cost of more than $10 million, the resulting model is also covered. [36]
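The coverage test amounts to a conjunction of compute and cost thresholds. The following Python sketch is illustrative only, under the simplified reading above; the function names and structure are assumptions for exposition, not the bill's statutory language, which contains further qualifications.

```python
# Illustrative sketch of SB 1047's coverage thresholds.
# Assumption: names and structure are hypothetical; the statute's
# actual definitions include qualifications not modeled here.

COMPUTE_THRESHOLD_OPS = 1e26        # training compute (integer or floating-point operations)
TRAINING_COST_THRESHOLD_USD = 100e6 # training cost threshold ($100 million)
FINE_TUNE_THRESHOLD_USD = 10e6      # fine-tuning cost threshold ($10 million)

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """A model is covered if it exceeds both the compute and the cost thresholds."""
    return (training_ops > COMPUTE_THRESHOLD_OPS
            and training_cost_usd > TRAINING_COST_THRESHOLD_USD)

def is_covered_derivative(base_is_covered: bool, fine_tune_cost_usd: float) -> bool:
    """A fine-tune of a covered model is itself covered if it costs more than $10 million."""
    return base_is_covered and fine_tune_cost_usd > FINE_TUNE_THRESHOLD_USD

# Example: a model trained with 3e26 operations at a cost of $150 million is covered,
# and a $20 million fine-tune of it is also covered.
assert is_covered_model(3e26, 150e6)
assert is_covered_derivative(True, 20e6)
```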
Critical harms are defined with respect to four categories: [1] [43]

* Creation or use of a chemical, biological, radiological, or nuclear weapon resulting in mass casualties
* Cyberattacks on critical infrastructure causing mass casualties or at least $500 million of damage
* Mass casualties or at least $500 million of damage caused by an AI model acting with limited human oversight, where the conduct would be a crime if committed by a human
* Other harms to public safety of comparable severity
SB 1047 would require developers, beginning January 1, 2026, to annually retain a third-party auditor to perform an independent audit of compliance with the bill's requirements. [35] The Government Operations Agency would review the results of safety tests and incidents, and issue guidance, standards, and best practices. [35] The bill also creates a nine-member Board of Frontier Models to supervise the Government Operations Agency's application of the bill. [42]
Proponents of the bill describe its provisions as simple and narrowly focused, with Sen. Scott Wiener describing it as a "light-touch, basic safety bill". [45] This has been disputed by critics, who describe the bill's language as vague and criticize it as consolidating power in the largest AI companies at the expense of smaller ones. [45] Proponents responded that the bill only applies to models trained using more than 10²⁶ floating-point operations at a cost of over $100 million, or fine-tuned at a cost of more than $10 million, and that the thresholds could be increased if needed. [46]
The penalty of perjury was a subject of debate and was eventually removed through an amendment. The scope of the "kill switch" requirement was also reduced, following concerns from open-source developers. There was also contention over the term "reasonable assurance", which the amendment replaced with "reasonable care". Critics argued that the "reasonable care" standard imposes an excessive burden by requiring confidence that models could not be used to cause catastrophic harm, while proponents argued that "reasonable care" does not imply certainty and is a well-established legal standard that already applies to AI developers under existing law. [46]
Supporters of the bill include Turing Award recipients Yoshua Bengio [47] and Geoffrey Hinton, [48] Elon Musk, [49] Bill de Blasio, [50] Kevin Esvelt, [51] Dan Hendrycks, [52] Vitalik Buterin, [53] OpenAI whistleblowers Daniel Kokotajlo [44] and William Saunders, [54] Lawrence Lessig, [55] Sneha Revanur, [56] Stuart Russell, [55] Jan Leike, [57] actors Mark Ruffalo, Sean Astin, and Rosie Perez, [58] Scott Aaronson, [59] and Max Tegmark. [60] The Center for AI Safety, Economic Security California [61] and Encode Justice [62] are sponsors. Yoshua Bengio writes that the bill is a major step towards testing and safety measures for "AI systems beyond a certain level of capability [that] can pose meaningful risks to democracies and public safety." [63] Max Tegmark likened the bill's focus on holding companies responsible for the harms caused by their models to the FDA requiring clinical trials before a company can release a drug to the market. He also argued that the opposition to the bill from some companies is "straight out of Big Tech's playbook." [60] The Los Angeles Times editorial board has also written in support of the bill. [64] The labor union SAG-AFTRA and two women's groups, the National Organization for Women and Fund Her, have sent support letters to Governor Newsom. [65] Over 120 Hollywood celebrities, including Mark Hamill, Jane Fonda, and J. J. Abrams, signed a statement in support of the bill. [66]
Andrew Ng, Fei-Fei Li, [67] Russell Wald, [68] Ion Stoica, Jeremy Howard, Turing Award recipient Yann LeCun, along with U.S. Congressmembers Nancy Pelosi, Zoe Lofgren, Anna Eshoo, Ro Khanna, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Barragán and Lou Correa have come out against the legislation. [6] [69] [70] Andrew Ng argues specifically that there are better, more targeted regulatory approaches, such as targeting deepfake pornography, watermarking generated materials, and investing in red teaming and other security measures. [63] University of California and Caltech researchers have also written open letters in opposition. [69]
The bill is opposed by industry trade associations including the California Chamber of Commerce, the Chamber of Progress, [a] the Computer & Communications Industry Association [b] and TechNet. [c] [2] Companies including Meta [74] and OpenAI [75] are opposed to or have raised concerns about the bill, while Google, [74] Microsoft and Anthropic [60] have proposed substantial amendments. [3] However, Anthropic announced its support for an amended version of the bill, while noting that some aspects still seemed concerning or ambiguous to it. [76] Several startup founders and venture capital organizations oppose the bill, including Y Combinator, [77] [78] Andreessen Horowitz, [79] [80] [81] Context Fund [82] [83] and Alliance for the Future. [84]
After the bill was amended, Anthropic CEO Dario Amodei wrote that "the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs. However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us." [85] Amodei also commented, "There were some companies talking about moving operations out of California. In fact, the bill applies to doing business in California or deploying models in California... Anything about 'Oh, we're moving our headquarters out of California...' That's just theater. That's just negotiating leverage. It bears no relationship to the actual content of the bill." [86] xAI CEO Elon Musk wrote, "I think California should probably pass the SB 1047 AI safety bill. For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public." [87]
On September 9, 2024, at least 113 current and former employees of AI companies OpenAI, Google DeepMind, Anthropic, Meta, and xAI signed a letter to Governor Newsom in support of SB 1047. [88] [89]
Critics expressed concern about the liability the bill would impose on open-source developers who use or improve existing freely available models. Yann LeCun, Chief AI Scientist at Meta, has suggested the bill would kill open-source AI models. [63] As of July 2024, there were concerns in the open-source community that, due to the threat of legal liability, companies like Meta might choose not to make models (for example, Llama) freely available. [90] [91] The AI Alliance, among other open-source organizations, has written in opposition to the bill. [69] In contrast, Creative Commons co-founder Lawrence Lessig has written that SB 1047 would make open-source AI models safer and more popular with developers, since both harm and liability for that harm would be less likely. [41]
The Artificial Intelligence Policy Institute, a pro-regulation AI think tank, [92] [93] ran three polls of California respondents on whether they supported or opposed SB 1047.
| Date | Support | Oppose | Not sure | Margin of error |
|---|---|---|---|---|
| July 9, 2024 [94] [95] | 59% | 20% | 22% | ±5.2% |
| August 4–5, 2024 [96] [97] | 65% | 25% | 10% | ±4.9% |
| August 25–26, 2024 [98] [99] [d] | 70% | 13% | 17% | ±4.2% |
A YouGov poll commissioned by the Economic Security Project found that 78% of registered voters across the United States supported SB 1047, and 80% thought that Governor Newsom should sign the bill. [100]
A David Binder Research poll commissioned by the Center for AI Safety, a group focused on mitigating societal-scale risk and a sponsor of the bill, found that 77% of Californians support a proposal to require companies to test AI models for safety risks, and 86% consider it an important priority for California to develop AI safety regulations. [101] [102] [103] [104]
On the other hand, the California Chamber of Commerce conducted its own poll, which showed 28% of respondents supporting the bill, 46% opposed, and 26% neutral. However, the framing of the question has been described as "badly biased". [e] [93]
Governor Gavin Newsom vetoed SB 1047 on September 29, 2024, citing concerns over the bill's regulatory framework targeting only large AI models based on their computational size, while not taking into account whether the models are deployed in high-risk environments. [105] [106] Newsom emphasized that this approach could create a false sense of security, overlooking smaller models that might present equally significant risks. [105] [107] He acknowledged the need for AI safety protocols [105] [108] but stressed the importance of adaptability in regulation as AI technology continues to evolve rapidly. [105] [109]
Governor Newsom also committed to working with technology experts, federal partners, and academic institutions, including Stanford University's Human-Centered AI (HAI) Institute, led by Dr. Fei-Fei Li. He announced plans to collaborate with these entities to advance responsible AI development, aiming to protect the public while fostering innovation. [105] [110]