Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

California State Legislature
Full name: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Introduced: February 7, 2024
Assembly voted: August 28, 2024 (48–16)
Senate voted: August 29, 2024 (30–9)
Sponsor(s): Scott Wiener
Governor: Gavin Newsom
Bill: SB 1047
Website: Bill Text
Status: Not passed (vetoed by Governor on September 29, 2024)

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist". [1] Specifically, the bill would apply to models which cost more than $100 million to train and were trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations. [2] SB 1047 would apply to all AI companies doing business in California—the location of the company does not matter. [3] The bill creates protections for whistleblowers [4] and requires developers to perform risk assessments of their models prior to release, under the supervision of the Government Operations Agency. It would also establish CalCompute, a University of California public cloud computing cluster for startups, researchers and community groups.

Background

The rapid increase in the capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022, caused some researchers and members of the public to express concern about existential risks associated with increasingly powerful AI systems. [5] [6] The plausibility of this threat is widely debated. [7] AI regulation is also sometimes advocated to prevent bias and privacy violations. [6] However, it has been criticized as possibly leading to regulatory capture by large AI companies like OpenAI, in which regulation would advance the interests of larger companies at the expense of smaller competitors and the public in general. [6]

In May 2023, hundreds of tech executives and AI researchers [8] signed a statement on the risk of extinction from AI, which read: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It received signatures from the two most-cited AI researchers, [9] [10] [11] Geoffrey Hinton and Yoshua Bengio, along with industry figures such as OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei. [12] [13] Many other experts thought that existential concerns were overblown and unrealistic, as well as a distraction from the near-term harms of AI, such as discriminatory automated decision-making. [14] Notably, Sam Altman called on Congress to regulate AI at a hearing that same month. [6] [15] Several technology companies have made voluntary commitments to conduct safety testing, for example at the AI Safety Summit and AI Seoul Summit. [16] [17]

Governor Newsom of California and President Biden issued executive orders on artificial intelligence in 2023. [18] [19] [20] State Senator Wiener said SB 1047 draws heavily on the Biden executive order, and is motivated by the absence of unified federal legislation on AI safety. [21] California has previously legislated on tech issues, including consumer privacy and net neutrality, in the absence of action by Congress. [22] [23]

History

The bill was originally drafted by Dan Hendrycks, co-founder of the Center for AI Safety, who has previously argued that evolutionary pressures on AI could lead to "a pathway towards being supplanted as the Earth's dominant species." [24] [25] The center issued a statement in May 2023 co-signed by Elon Musk and hundreds of other business leaders stating that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." [26]

State Senator Wiener first proposed AI legislation for California through an intent bill called SB 294, the Safety in Artificial Intelligence Act, in September 2023. [27] [28] [29] SB 1047 was introduced by Wiener on February 7, 2024. [30] [31]

On May 21, SB 1047 passed the Senate 32–1. [32] [33] Wiener significantly amended the bill on August 15, 2024, in response to industry advice. [34] The amendments added clarifications and removed both the proposed "Frontier Model Division" and the penalty of perjury. [35] [36]

On August 28, the bill passed the State Assembly 48–16. Then, due to the amendments, the Senate voted on the bill once more, passing it 30–9. [37] [38]

On September 29, Governor Gavin Newsom vetoed the bill. It is considered unlikely that the legislature will override the governor's veto with a two-thirds vote from both houses. [39]

Provisions

Prior to model training, developers of covered models and derivatives are required to submit a certification, subject to auditing, that they have mitigated "reasonable" risks of "critical harms" from the covered model and its derivatives, including post-training modifications. Safeguards to reduce risk include the ability to shut down the model, [4] which has been variously described as a "kill switch" [40] and a "circuit breaker". [41] Whistleblowing provisions protect employees who report safety problems and incidents. [4]

SB 1047 would also create a public cloud computing cluster called CalCompute, associated with the University of California, to support startups, researchers, and community groups that lack large-scale computing resources. [35]

Covered models

SB 1047 covers AI models trained with more than 10²⁶ integer or floating-point operations of compute at a cost of over $100 million. [2] [42] If a covered model is fine-tuned at a cost of more than $10 million, the resulting model is also covered. [36]
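Read as a rule, coverage amounts to two independent threshold tests. The following sketch is purely illustrative: the class and function names are hypothetical and do not come from the bill text; only the threshold values are the ones cited above.

```python
# Illustrative sketch (not from the bill) of SB 1047's coverage thresholds
# as summarized above. All names here are hypothetical.
from dataclasses import dataclass

COMPUTE_THRESHOLD_OPS = 10**26           # integer or floating-point operations
TRAINING_COST_THRESHOLD_USD = 100_000_000
FINE_TUNE_COST_THRESHOLD_USD = 10_000_000

@dataclass
class Model:
    training_ops: float               # total training compute, in operations
    training_cost_usd: float          # cost of the training run
    fine_tuned_from_covered: bool = False
    fine_tune_cost_usd: float = 0.0

def is_covered(model: Model) -> bool:
    """Would this model fall under SB 1047, per the description above?"""
    # A base model is covered if it crosses both the compute and cost thresholds.
    if (model.training_ops > COMPUTE_THRESHOLD_OPS
            and model.training_cost_usd > TRAINING_COST_THRESHOLD_USD):
        return True
    # A fine-tune of a covered model is itself covered if the fine-tuning
    # cost exceeds the $10 million threshold.
    return (model.fine_tuned_from_covered
            and model.fine_tune_cost_usd > FINE_TUNE_COST_THRESHOLD_USD)
```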

Critical harms

Critical harms are defined with respect to four categories: [1] [43]

  1. creation or use of a chemical, biological, radiological, or nuclear weapon resulting in mass casualties;
  2. cyberattacks on critical infrastructure causing mass casualties or at least $500 million in damage;
  3. an AI model acting with limited human oversight and causing mass casualties or at least $500 million in damage through conduct that would be a crime if committed by a human;
  4. other harms of comparable severity.

Compliance and supervision

SB 1047 would require developers, beginning January 1, 2026, to annually retain a third-party auditor to perform an independent audit of compliance with the requirements of the bill. [35] The Government Operations Agency would review the results of safety tests and incidents, and issue guidance, standards, and best practices. [35] The bill also creates a nine-member Board of Frontier Models to supervise the Government Operations Agency's application of the bill. [42]

Reception

Debate

Proponents of the bill describe its provisions as simple and narrowly focused, with Sen. Scott Wiener describing it as a "light-touch, basic safety bill". [45] Critics dispute this, describing the bill's language as vague and criticizing it as consolidating power in the largest AI companies at the expense of smaller ones. [45] Proponents responded that the bill only applies to models trained with more than 10²⁶ operations of compute at a cost of over $100 million, or fine-tuned at a cost of more than $10 million, and that the thresholds could be increased if needed. [46]

The penalty of perjury was a subject of debate and was eventually removed through an amendment. The scope of the "kill switch" requirement was also reduced, following concerns from open-source developers. Another point of contention was the term "reasonable assurance", which the amendments replaced with "reasonable care". Critics argued that the "reasonable care" standard imposes an excessive burden by requiring confidence that models could not be used to cause catastrophic harm, while proponents argued that "reasonable care" does not imply certainty and is a well-established legal standard that already applies to AI developers under existing law. [46]

Support and opposition

Supporters of the bill include Turing Award recipients Yoshua Bengio [47] and Geoffrey Hinton, [48] Elon Musk, [49] Bill de Blasio, [50] Kevin Esvelt, [51] Dan Hendrycks, [52] Vitalik Buterin, [53] OpenAI whistleblowers Daniel Kokotajlo [44] and William Saunders, [54] Lawrence Lessig, [55] Sneha Revanur, [56] Stuart Russell, [55] Jan Leike, [57] actors Mark Ruffalo, Sean Astin, and Rosie Perez, [58] Scott Aaronson, [59] and Max Tegmark. [60] The Center for AI Safety, Economic Security California [61] and Encode Justice [62] are sponsors. Yoshua Bengio writes that the bill is a major step towards testing and safety measures for "AI systems beyond a certain level of capability [that] can pose meaningful risks to democracies and public safety." [63] Max Tegmark likened the bill's focus on holding companies responsible for the harms caused by their models to the FDA requiring clinical trials before a company can release a drug to the market. He also argued that the opposition to the bill from some companies is "straight out of Big Tech's playbook." [60] The Los Angeles Times editorial board has also written in support of the bill. [64] The labor union SAG-AFTRA and two women's groups, the National Organization for Women and Fund Her, have sent support letters to Governor Newsom. [65] Over 120 Hollywood celebrities, including Mark Hamill, Jane Fonda, and J. J. Abrams, signed a statement in support of the bill. [66]

Andrew Ng, Fei-Fei Li, [67] Russell Wald, [68] Ion Stoica, Jeremy Howard, Turing Award recipient Yann LeCun, along with U.S. Congressmembers Nancy Pelosi, Zoe Lofgren, Anna Eshoo, Ro Khanna, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Barragán and Lou Correa have come out against the legislation. [6] [69] [70] Andrew Ng argues specifically that there are better, more targeted regulatory approaches, such as targeting deepfake pornography, watermarking generated materials, and investing in red teaming and other security measures. [63] University of California and Caltech researchers have also written open letters in opposition. [69]

Industry

The bill is opposed by industry trade associations including the California Chamber of Commerce, the Chamber of Progress, [a] the Computer & Communications Industry Association [b] and TechNet. [c] [2] Companies including Meta [74] and OpenAI [75] are opposed to or have raised concerns about the bill, while Google, [74] Microsoft and Anthropic [60] have proposed substantial amendments. [3] Anthropic later announced support for an amended version of the bill, while noting that some aspects still seemed concerning or ambiguous to it. [76] Several startup-founder and venture-capital organizations oppose the bill, including Y Combinator, [77] [78] Andreessen Horowitz, [79] [80] [81] Context Fund [82] [83] and Alliance for the Future. [84]

After the bill was amended, Anthropic CEO Dario Amodei wrote that "the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs. However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us." [85] Amodei also commented, "There were some companies talking about moving operations out of California. In fact, the bill applies to doing business in California or deploying models in California... Anything about 'Oh, we're moving our headquarters out of California...' That's just theater. That's just negotiating leverage. It bears no relationship to the actual content of the bill." [86] xAI CEO Elon Musk wrote, "I think California should probably pass the SB 1047 AI safety bill. For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public." [87]

On September 9, 2024, at least 113 current and former employees of AI companies OpenAI, Google DeepMind, Anthropic, Meta, and xAI signed a letter to Governor Newsom in support of SB 1047. [88] [89]

Open source developers

Critics expressed concern that the bill would impose liability on open-source developers who use or improve existing freely available models. Yann LeCun, the Chief AI Officer of Meta, has suggested the bill would kill open source AI models. [63] As of July 2024, there were concerns in the open-source community that, due to the threat of legal liability, companies like Meta may choose not to make models (for example, Llama) freely available. [90] [91] The AI Alliance has written in opposition to the bill, among other open-source organizations. [69] In contrast, Creative Commons co-founder Lawrence Lessig has written that SB 1047 would make open source AI models safer and more popular with developers, since both harm and liability for that harm are less likely. [41]

Public opinion polls

The Artificial Intelligence Policy Institute, a pro-regulation AI think tank, [92] [93] ran three polls of California respondents on whether they supported or opposed SB 1047.

Date | Support | Oppose | Not sure | Margin of error
July 9, 2024 [94] [95] | 59% | 20% | 22% | ±5.2%
August 4–5, 2024 [96] [97] | 65% | 25% | 10% | ±4.9%
August 25–26, 2024 [98] [99] [d] | 70% | 13% | 17% | ±4.2%

A YouGov poll commissioned by the Economic Security Project found that 78% of registered voters across the United States supported SB 1047, and 80% thought that Governor Newsom should sign the bill. [100]

A David Binder Research poll commissioned by the Center for AI Safety, a group focused on mitigating societal-scale risk and a sponsor of the bill, found that 77% of Californians support a proposal to require companies to test AI models for safety risks, and 86% consider it an important priority for California to develop AI safety regulations. [101] [102] [103] [104]

On the other hand, the California Chamber of Commerce conducted its own poll, which showed that 28% of respondents supported the bill, 46% opposed it, and 26% were neutral. However, the framing of the question has been described as "badly biased". [e] [93]

Governor's veto

Governor Gavin Newsom vetoed SB 1047 on September 29, 2024, citing concerns over the bill's regulatory framework targeting only large AI models based on their computational size, while not taking into account whether the models are deployed in high-risk environments. [105] [106] Newsom emphasized that this approach could create a false sense of security, overlooking smaller models that might present equally significant risks. [105] [107] He acknowledged the need for AI safety protocols [105] [108] but stressed the importance of adaptability in regulation as AI technology continues to evolve rapidly. [105] [109]

Governor Newsom also committed to working with technology experts, federal partners, and academic institutions, including Stanford University's Human-Centered AI (HAI) Institute, led by Dr. Fei-Fei Li. He announced plans to collaborate with these entities to advance responsible AI development, aiming to protect the public while fostering innovation. [105] [110]

Notes

  1. whose corporate partners include Amazon, Apple, Google and Meta [71]
  2. whose members include Amazon, Apple, Google and Meta [72]
  3. whose members include Amazon, Anthropic, Apple, Google, Meta and OpenAI [73]
  4. Question asked by the Artificial Intelligence Policy Institute's poll in August 2024: "Some policy makers are proposing a law in California, Senate Bill 1047, which would require that companies that develop advanced AI conduct safety tests and create liability for AI model developers if their models cause catastrophic harm and they did not take appropriate precautions." [93]
  5. Question asked in the California Chamber of Commerce's poll: "Lawmakers in Sacramento have proposed a new state law—SB 1047—that would create a new California state regulatory agency to determine how AI models can be developed. This new law would require small startup companies to potentially pay tens of millions of dollars in fines if they don’t implement orders from state bureaucrats. Some say burdensome regulations like SB 1047 would potentially lead companies to move out of state or out of the country, taking investment and jobs away from California. Given everything you just read, do you support or oppose a proposal like SB 1047?" [93]

References

  1. Bauer-Kahan, Rebecca. "ASSEMBLY COMMITTEE ON PRIVACY AND CONSUMER PROTECTION" (PDF). California Assembly. State of California. Retrieved 1 August 2024.
  2. Daniels, Owen J. (2024-06-17). "California AI bill becomes a lightning rod—for safety advocates and developers alike". Bulletin of the Atomic Scientists.
  3. Rana, Preetika (2024-08-07). "AI Companies Fight to Stop California Safety Rules". The Wall Street Journal. Retrieved 2024-08-08.
  4. Thibodeau, Patrick (2024-06-06). "Catastrophic AI risks highlight need for whistleblower laws". TechTarget. Retrieved 2024-08-06.
  5. Henshall, Will (2023-09-07). "Yoshua Bengio". TIME.
  6. Goldman, Sharon. "It's AI's "Sharks vs. Jets"—welcome to the fight over California's AI safety bill". Fortune. Retrieved 2024-07-29.
  7. De Vynck, Gerrit (20 May 2023). "The debate over whether AI will destroy us is dividing Silicon Valley". The Washington Post.
  8. Gregg, Aaron; Lima-Strong, Cristiano; Vynck, Gerrit De (2023-05-31). "AI poses 'risk of extinction' on par with nukes, tech leaders say". The Washington Post. ISSN 0190-8286. Retrieved 2024-07-03.
  9. Hinton, Geoffrey (2024-04-17). "Yoshua Bengio". TIME. Retrieved 2024-09-03.
  10. "World Scientists Citation Rankings 2024". AD Scientific Index. Retrieved 2024-09-03.
  11. "Profiles". Google Scholar. Retrieved 2024-09-02.
  12. Abdul, Geneva (2023-05-30). "Risk of extinction by AI should be global priority, say experts". The Guardian. ISSN 0261-3077. Retrieved 2024-08-30.
  13. "Statement on AI Risk | CAIS". www.safe.ai. Retrieved 2024-08-30.
  14. "Artificial intelligence could lead to extinction, experts warn". BBC News. 2023-05-30. Retrieved 2024-08-30.
  15. Zorthian, Julia (2023-05-16). "OpenAI CEO Sam Altman Agrees AI Must Be Regulated". TIME. Retrieved 2024-08-30.
  16. Milmo, Dan (2023-11-03). "Tech firms to allow vetting of AI tools, as Musk warns all human jobs threatened". The Guardian. Retrieved 2024-08-12.
  17. Browne, Ryan (2024-05-21). "Tech giants pledge AI safety commitments — including a 'kill switch' if they can't mitigate risks". CNBC. Retrieved 2024-08-12.
  18. "Governor Newsom Signs Executive Order to Prepare California for the Progress of Artificial Intelligence". Governor Gavin Newsom. 2023-09-06.
  19. "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence". White House. 2023-10-30.
  20. Riquelmy, Alan (2024-02-08). "California lawmaker aims to put up guardrails for AI development". Courthouse News Service. Retrieved 2024-08-04.
  21. Myrow, Rachael (2024-02-16). "California Lawmakers Take On AI Regulation With a Host of Bills". KQED.
  22. Piper, Kelsey (2024-08-29). "We spoke with the architect behind the notorious AI safety bill". Vox. Retrieved 2024-09-04.
  23. Lovely, Garrison (2024-08-15). "California's AI Safety Bill Is a Mask-Off Moment for the Industry". The Nation. Retrieved 2024-09-04.
  24. Edwards, Benj (2024-07-29). "From sci-fi to state law: California's plan to prevent AI catastrophe". Ars Technica. Retrieved 2024-08-30.
  25. Hendrycks, Dan (2023-05-31). "The Darwinian Argument for Worrying About AI". Time. Retrieved 2024-10-01.
  26. "Prominent AI leaders warn of 'risk of extinction' from new technology". Los Angeles Times. 2023-05-31. Retrieved 2024-08-30.
  27. Perrigo, Billy (2023-09-13). "California Bill Proposes Regulating AI at State Level". TIME. Retrieved 2024-08-12.
  28. David, Emilia (2023-09-14). "California lawmaker proposes regulation of AI models". The Verge. Retrieved 2024-08-12.
  29. "Senator Wiener Introduces Safety Framework in Artificial Intelligence Legislation". Senator Scott Wiener. 2023-09-13. Retrieved 2024-08-12.
  30. "California's SB-1047: Understanding the Safe and Secure Innovation for Frontier Artificial Intelligence Act". DLA Piper. Retrieved 2024-08-30.
  31. Edwards, Benj (2024-07-29). "From sci-fi to state law: California's plan to prevent AI catastrophe". Ars Technica. Retrieved 2024-08-30.
  32. Hendrycks, Dan (2024-08-27). "California's Draft AI Law Would Protect More than Just People". TIME. Retrieved 2024-08-30.
  33. "California SB1047 | 2023-2024 | Regular Session". LegiScan. Retrieved 2024-08-30.
  34. Zeff, Maxwell (2024-08-15). "California weakens bill to prevent AI disasters before final vote, taking advice from Anthropic". TechCrunch. Retrieved 2024-08-23.
  35. Calvin, Nathan (August 15, 2024). "SB 1047 August 15 Author Amendments Overview". safesecureai.org. Retrieved 2024-08-16.
  36. "Senator Wiener's Groundbreaking Artificial Intelligence Bill Advances To The Assembly Floor With Amendments Responding To Industry Engagement". Senator Scott Wiener. 2024-08-16. Retrieved 2024-08-17.
  37. Wolverton, Troy (2024-08-29). "Legislature passes Wiener's controversial AI safety bill". San Francisco Examiner. Retrieved 2024-08-30.
  38. "Bill Votes - SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act". leginfo.legislature.ca.gov. Retrieved 2024-09-04.
  39. Lee, Wendy (2024-09-29). "Gov. Gavin Newsom vetoes AI safety bill opposed by Silicon Valley". Los Angeles Times. Retrieved 2024-09-29.
  40. Financial Times (2024-06-07). "Outcry from big AI firms over California AI "kill switch" bill". Ars Technica. Retrieved 2024-08-30.
  41. Lessig, Lawrence (2024-08-30). "Big Tech Is Very Afraid of a Very Modest AI Safety Bill". The Nation. Retrieved 2024-09-03.
  42. "07/01/24 - Assembly Judiciary Bill Analysis". California Legislative Information.
  43. "Analysis of the 7/3 Revision of SB 1047". Context Fund.
  44. Johnson, Khari (2024-08-12). "Why Silicon Valley is trying so hard to kill this AI bill in California". CalMatters. Retrieved 2024-08-12.
  45. Goldman, Sharon. "It's AI's "Sharks vs. Jets"—welcome to the fight over California's AI safety bill". Fortune. Retrieved 2024-07-29.
  46. "Misrepresentations of California's AI safety bill". Brookings. Retrieved 2024-10-02.
  47. Bengio, Yoshua. "Yoshua Bengio: California's AI safety bill will protect consumers and innovation". Fortune. Retrieved 2024-08-17.
  48. Kokalitcheva, Kia (2024-06-26). "California's AI safety squeeze". Axios.
  49. Coldewey, Devin (2024-08-26). "Elon Musk unexpectedly offers support for California's AI bill". TechCrunch. Retrieved 2024-08-27.
  50. de Blasio, Bill (2024-08-24). "X post". X.
  51. Riquelmy, Alan (2024-08-14). "California AI regulation bill heads to must-pass hearing". Courthouse News Service. Retrieved 2024-08-15.
  52. Metz, Cade (2024-08-14). "California A.I. Bill Causes Alarm in Silicon Valley". New York Times. Retrieved 2024-08-22.
  53. Jamal, Nynu V. (2024-08-27). "California's Bold AI Safety Bill: Buterin, Musk Endorse, OpenAI Wary". Coin Edition. Retrieved 2024-08-27.
  54. Zeff, Maxwell (2024-08-23). "'Disappointed but not surprised': Former employees speak on OpenAI's opposition to SB 1047". TechCrunch. Retrieved 2024-08-23.
  55. Pillay, Tharin (2024-08-07). "Renowned Experts Pen Support for California's Landmark AI Safety Bill". TIME. Retrieved 2024-08-08.
  56. "Assembly Standing Committee on Privacy and Consumer Protection". CalMatters. Retrieved 2024-08-08.
  57. "Dozens of AI workers buck their employers, sign letter in support of Wiener AI bill". The San Francisco Standard. 2024-09-09. Retrieved 2024-09-10.
  58. Korte, Lara; Gardiner, Dustin (2024-09-17). "Act natural". Politico.
  59. "Call to Lead". Call to Lead. Retrieved 2024-09-10.
  60. Samuel, Sigal (2024-08-05). "It's practically impossible to run a big AI company ethically". Vox. Retrieved 2024-08-06.
  61. DiFeliciantonio, Chase (2024-06-28). "AI companies asked for regulation. Now that it's coming, some are furious". San Francisco Chronicle.
  62. Korte, Lara (2024-02-12). "A brewing battle over AI". Politico.
  63. Edwards, Benj (2024-07-29). "From sci-fi to state law: California's plan to prevent AI catastrophe". Ars Technica. Retrieved 2024-07-30.
  64. The Times Editorial Board (2024-08-22). "Editorial: Why California should lead on AI regulation". Los Angeles Times. Retrieved 2024-08-23.
  65. Lovely, Garrison (2024-09-12). "Actors union and women's groups push Gavin Newsom to sign AI safety bill". The Verge. Retrieved 2024-09-12.
  66. Lee, Wendy (2024-09-24). "Mark Hamill, Jane Fonda, J.J. Abrams urge Gov. Newsom to sign AI safety bill". LA Times. Retrieved 2024-09-24.
  67. Li, Fei-Fei. "'The Godmother of AI' says California's well-intended AI bill will harm the U.S. ecosystem". Fortune. Retrieved 2024-08-08.
  68. "Groundswell of Opposition to CA's AI Bill as it Nears Vote". Pirate Wires. August 13, 2024. Retrieved 2024-11-12.
  69. "SB 1047 Impacts Analysis". Context Fund.
  70. "Assembly Judiciary Committee 2024-07-02". California State Assembly.
  71. "Corporate Partners". Chamber of Progress. 16 May 2022.
  72. "Members". Computer & Communications Industry Association.
  73. "Members". TechNet.
  74. Korte, Lara (2024-06-26). "Big Tech and the little guy". Politico.
  75. Zeff, Maxwell (2024-08-21). "OpenAI's opposition to California's AI bill 'makes no sense,' says state senator". TechCrunch. Retrieved 2024-08-23.
  76. Waters, John K. (2024-08-26). "Anthropic Announces Cautious Support for New California AI Regulation Legislation -". Campus Technology. Retrieved 2024-11-12.
  77. "Little Tech Brings a Big Flex to Sacramento". Politico. 21 June 2024.
  78. "Proposed California law seeks to protect public from AI catastrophes". The Mercury News. 25 July 2024.
  79. "California's Senate Bill 1047 - What You Need to Know". Andreessen Horowitz.
  80. Midha, Anjney (25 July 2024). "California's AI Bill Undermines the Sector's Achievements". Financial Times.
  81. "Senate Bill 1047 will crush AI innovation in California". Orange County Register. 10 July 2024.
  82. "AI Startups Push to Limit or Kill California Public Safety Bill". Bloomberg Law.
  83. "The Batch: Issue 257". Deeplearning.ai. 10 July 2024.
  84. "The AI Safety Fog of War". Politico. 2024-05-02.
  85. "Anthropic says California AI bill's benefits likely outweigh costs". Reuters. 2024-08-23. Retrieved 2024-08-23.
  86. Smith, Noah (2024-08-29). "Anthropic CEO Dario Amodei on AI's Moat, Risk, and SB 1047". "ECON 102" with Noah Smith and Erik Torenberg (Podcast). Event occurs at 55:13. Retrieved 2024-09-03.
  87. Coldewey, Devin (2024-08-26). "Elon Musk unexpectedly offers support for California's AI bill". TechCrunch. Retrieved 2024-08-27.
  88. "Dozens of AI workers buck their employers, sign letter in support of Wiener AI bill". The San Francisco Standard. 2024-09-09. Retrieved 2024-09-10.
  89. "Call to Lead". Call to Lead. Retrieved 2024-09-10.
  90. Piper, Kelsey (2024-07-19). "Inside the fight over California's new AI bill". Vox. Retrieved 2024-07-29.
  91. Piper, Kelsey (2024-06-14). "The AI bill that has Big Tech panicked". Vox. Retrieved 2024-07-29.
  92. Robertson, Derek (2024-05-06). "Exclusive poll: Americans favor AI data regulation". Politico. Retrieved 2024-08-18.
  93. Lovely, Garrison (2024-08-28). "Tech Industry Uses Push Poll to Stop California AI Bill". The American Prospect. Retrieved 2024-11-12.
  94. Bordelon, Brendan. "What Kamala Harris means for tech". POLITICO Pro.(subscription required)
  95. "New Poll: California Voters, Including Tech Workers, Strongly Support AI Regulation Bill SB1047". Artificial Intelligence Policy Institute. 22 July 2024.
  96. Sullivan, Mark (2024-08-08). "Elon Musk's Grok chatbot spewed election disinformation". Fast Company. Retrieved 2024-08-13.
  97. "Poll: Californians Support Strong Version of SB1047, Disagree With Anthropic's Proposed Changes". Artificial Intelligence Policy Institute. 7 August 2024. Retrieved 2024-08-13.
  98. Gardiner, Dustin (2024-08-28). "Newsom's scaled-back surrogate role". Politico.
  99. "Poll: 7 in 10 Californians Support SB1047, Will Blame Governor Newsom for AI-Enabled Catastrophe if He Vetoes". Artificial Intelligence Policy Institute. 16 August 2024. Retrieved 2024-08-28.
  100. "New YouGov National Poll Shows 80% Support for SB 1047 and AI Safety". Economic Security Project Action. Retrieved 2024-09-19.
  101. "California Likely Voter Survey: Public Opinion Research Summary". David Binder Research.
  102. Lee, Wendy (2024-06-19). "California lawmakers are trying to regulate AI before it's too late. Here's how". Los Angeles Times.
  103. Piper, Kelsey (2024-07-19). "Inside the fight over California's new AI bill". Vox. Retrieved 2024-07-22.
  104. "Senator Wiener's Landmark AI Bill Passes Assembly". Office of Senator Wiener. 29 August 2024.
  105. "Gavin Newsom SB 1047 Veto letter" (PDF). Office of The Governor.
  106. "Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an Al system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology"
  107. "By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good."
  108. "Let me be clear - I agree with the author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails .should be implemented, and severe consequences for bad actors must be clear and enforceable."
  109. "Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an Al system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology"
  110. Newsom, Gavin (2024-09-29). "Governor Newsom announces new initiatives to advance safe and responsible AI, protect Californians". Governor Gavin Newsom. Retrieved 2024-09-29.