Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

California State Legislature
Full name: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Introduced: February 7, 2024
Senate voted: May 21, 2024 (32-1)
Sponsor(s): Scott Wiener
Governor: Gavin Newsom
Bill: SB 1047

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill that aims to reduce the risks posed by frontier artificial intelligence models, the largest and most powerful foundation models. If passed, the bill would also establish CalCompute, a public cloud computing cluster for startups, researchers and community groups.


Background

The bill was motivated by the rapid increase in capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022.

In May 2023, AI pioneer Geoffrey Hinton resigned from Google, warning that AI could overtake human intelligence in as little as 5 to 20 years. [1] [2] Later that same month, the Center for AI Safety released a statement signed by Hinton and other AI researchers and leaders: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Governor Newsom and President Biden issued executive orders on artificial intelligence in late 2023. [3] [4] Senator Wiener says his bill draws heavily on the Biden executive order. [5]

Provisions

SB 1047 initially covers AI models with training compute over 10^26 integer or floating-point operations. The same compute threshold is used in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In contrast, the European Union's AI Act set its threshold at 10^25, one order of magnitude lower. [6]

In addition to this compute threshold, the bill has a cost threshold of $100 million. The goal is to exempt startups and small companies, while covering large companies that spend over $100 million per training run.
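To illustrate how the two thresholds combine, the sketch below checks whether a hypothetical training run would fall under the bill. It is a minimal illustration only: the function and field names are invented for this example and do not come from the statutory text.

```python
from dataclasses import dataclass

# Thresholds described in the article; exact statutory definitions are more detailed.
COMPUTE_THRESHOLD_OPS = 1e26      # 10^26 integer or floating-point operations
COST_THRESHOLD_USD = 100_000_000  # $100 million per training run

@dataclass
class TrainingRun:
    compute_ops: float  # total training operations
    cost_usd: float     # total training cost in dollars

def is_covered_model(run: TrainingRun) -> bool:
    """Return True when a run exceeds both the compute and cost thresholds."""
    return run.compute_ops > COMPUTE_THRESHOLD_OPS and run.cost_usd > COST_THRESHOLD_USD

# Example: a run of 3x10^26 operations costing $250 million exceeds both thresholds.
print(is_covered_model(TrainingRun(compute_ops=3e26, cost_usd=2.5e8)))  # True
```

Under this reading, a startup whose training run stays below either threshold would not be covered, which matches the bill's stated goal of exempting smaller developers.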

Developers of models that exceed the compute and cost thresholds are required to conduct safety testing.

Developers of covered models are required to implement reasonable safeguards to reduce risk, including the ability to shut down the model. Whistleblowing provisions protect employees who report safety problems and incidents.

The bill establishes a Frontier Model Division to review the results of safety tests and incidents, and issue guidance, standards and best practices. It also creates a public cloud computing cluster called CalCompute to enable research into safe AI models, and provide compute for academics and startups.

Reception

Supporters of the bill include Turing Award recipients Geoffrey Hinton and Yoshua Bengio. [7] The Center for AI Safety, Economic Security California [8] and Encode Justice [9] are sponsors.

The bill is opposed by industry trade associations including the California Chamber of Commerce, the Chamber of Progress,[note 1] the Computer & Communications Industry Association[note 2] and TechNet.[note 3] [13] Companies such as Meta and Google argue that the bill would undermine innovation. [14]

Public opinion

A David Binder Research poll commissioned by the Center for AI Safety Action Fund found that in May 2024, 77% of Californians support a proposal to require companies to test AI models for safety risks before releasing them. [15] A poll by the AI Policy Institute found 77% of Californians think the government should mandate safety testing for powerful AI models. [16]

Notes

  1. whose corporate partners include Amazon, Apple, Google and Meta [10]
  2. whose members include Amazon, Apple, Google and Meta [11]
  3. whose members include Amazon, Anthropic, Apple, Google, Meta and OpenAI [12]


References

  1. Metz, Cade (2023-05-01). "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead". The New York Times.
  2. Lazarus, Ben (2023-05-06). "The godfather of AI: why I left Google". The Spectator.
  3. "Governor Newsom Signs Executive Order to Prepare California for the Progress of Artificial Intelligence". Governor Gavin Newsom. 2023-09-06.
  4. "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence". White House. 2023-10-30.
  5. Myrow, Rachael (2024-02-16). "California Lawmakers Take On AI Regulation With a Host of Bills". KQED.
  6. "Artificial Intelligence – Questions and Answers". European Commission. 2023-12-12.
  7. Kokalitcheva, Kia (2024-06-26). "California's AI safety squeeze". Axios.
  8. DiFeliciantonio, Chase (2024-06-28). "AI companies asked for regulation. Now that it's coming, some are furious". San Francisco Chronicle.
  9. Korte, Lara (2024-02-12). "A brewing battle over AI". Politico.
  10. "Corporate Partners". Chamber of Progress.
  11. "Members". Computer & Communications Industry Association.
  12. "Members". TechNet.
  13. Daniels, Owen J. (2024-06-17). "California AI bill becomes a lightning rod—for safety advocates and developers alike". Bulletin of the Atomic Scientists.
  14. Korte, Lara (2024-06-26). "Big Tech and the little guy". Politico.
  15. "California Likely Voter Survey: Public Opinion Research Summary". David Binder Research.
  16. "AIPI Survey". AI Policy Institute.