Regulation of AI in the United States

Discussions on regulation of artificial intelligence in the United States have covered the timeliness of regulating AI; the nature of the federal regulatory framework to govern and promote AI, including which agency should lead and what regulatory and governing powers it should hold; how to update regulations in the face of rapidly changing technology; and the roles of state governments and courts. [1]

Federal Government regulatory measures

As early as 2016, the Obama administration had begun to focus on the risks of artificial intelligence and options for regulating it. In a report titled Preparing for the Future of Artificial Intelligence, [2] the National Science and Technology Council set a precedent of allowing researchers to continue developing new AI technologies with few restrictions. The report states that "the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk...". [3] These risks would be the principal justification for any new regulation, given that existing regulations often would not apply to AI technology.

The first major report was the National Strategic Research and Development Plan for Artificial Intelligence. [4] On August 13, 2018, Section 1051 of the Fiscal Year 2019 John S. McCain National Defense Authorization Act (P.L. 115-232) established the National Security Commission on Artificial Intelligence "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States." [5] The commission provides steering on the regulation of security-related AI. [6] The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States. [7] [8]

On January 7, 2020, following the Executive Order on Maintaining American Leadership in Artificial Intelligence, [9] the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, [10] which includes ten principles for United States agencies to weigh when deciding whether and how to regulate AI. [11] In response, the National Institute of Standards and Technology released a position paper, [12] and the Defense Innovation Board issued recommendations on the ethical use of AI. [13] The administration then formally requested public comments on the draft guidance in the Federal Register. [14]

Other agencies working on the regulation of AI include the Food and Drug Administration, [15] which has created pathways to regulate the incorporation of AI in medical imaging. [16] The National Science and Technology Council also published the National Artificial Intelligence Research and Development Strategic Plan, [17] which drew public scrutiny and recommendations for improving it to enable trustworthy AI. [18]

In March 2021, the National Security Commission on Artificial Intelligence released its final report. [19] In the report, the commission stated that "Advances in AI, including the mastery of more general AI capabilities along one or more dimensions, will likely provide new capabilities and applications. Some of these advances could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should monitor advances in AI and make necessary investments in technology and give attention to policy so as to ensure that AI systems and their uses align with our goals and values."

In June 2022, Senators Rob Portman and Gary Peters introduced the Global Catastrophic Risk Mitigation Act. The bipartisan bill "would also help counter the risk of artificial intelligence... from being abused in ways that may pose a catastrophic risk". [20] [21] On October 4, 2022, President Joe Biden unveiled the Blueprint for an AI Bill of Rights, [22] which outlines five protections Americans should have in the AI age: 1. Safe and Effective Systems, 2. Algorithmic Discrimination Protections, 3. Data Privacy, 4. Notice and Explanation, and 5. Human Alternatives, Consideration, and Fallback. The Blueprint was developed by the Office of Science and Technology Policy (OSTP), the US government office that advises the president on science and technology, which had announced the initiative in October 2021. [23]

In July 2023, the Biden–Harris Administration secured voluntary commitments from seven companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to manage the risks associated with AI. The companies committed to ensure AI products undergo both internal and external security testing before public release; to share information on the management of AI risks with the industry, governments, civil society, and academia; to prioritize cybersecurity and protect proprietary AI system components; to develop mechanisms to inform users when content is AI-generated, such as watermarking; to publicly report on their AI systems' capabilities, limitations, and areas of use; to prioritize research on societal risks posed by AI, including bias, discrimination, and privacy concerns; and to develop AI systems to address societal challenges, ranging from cancer prevention to climate change mitigation. In September 2023, eight additional companies – Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI – subscribed to these voluntary commitments. [24] [25]

In October 2023, the Biden administration signaled that it would release an executive order leveraging the federal government's purchasing power to shape AI regulations, hinting at a proactive governmental stance on regulating AI technologies. [26] On October 30, 2023, President Biden issued the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Executive Order addresses a variety of issues, including standards for critical infrastructure, AI-enhanced cybersecurity, and federally funded biological synthesis projects. [27]

The Executive Order grants various agencies and departments of the US government, including the Energy and Defense departments, the authority to apply existing consumer protection laws to AI development. [28]

The Executive Order builds on the administration's earlier agreements with AI companies by instituting new initiatives to "red-team", or stress-test, AI dual-use foundation models, especially those with the potential to pose security risks, with data and results shared with the federal government.

The Executive Order also recognizes AI's social challenges and calls for companies building AI dual-use foundation models to be wary of these societal problems. For example, it states that AI should not "worsen job quality" and should not "cause labor-force disruptions". It further mandates that AI must "advance equity and civil rights" and cannot disadvantage marginalized groups. [29] The order also calls for foundation models to include "watermarks" to help the public discern between human- and AI-generated content, a provision that has drawn controversy and criticism from deepfake detection researchers. [30]

State and Local Government interventions

The New York City Bias Audit Law (Local Law 144 [31]) was enacted by the NYC Council in November 2021. Originally due to come into effect on January 1, 2023, its enforcement date was pushed back because of the high volume of comments received during the public hearing on the Department of Consumer and Worker Protection's (DCWP) proposed rules clarifying the law's requirements. It eventually became effective on July 5, 2023. [32] From that date, companies operating and hiring in New York City are prohibited from using automated tools to hire candidates or promote employees unless the tools have been independently audited for bias.

On March 21, 2024, the State of Tennessee enacted the ELVIS Act, aimed specifically at audio deepfakes and voice cloning. [33] It was the first legislation in the nation aimed at regulating AI simulation of image, voice, and likeness. [34] The bill passed unanimously in the Tennessee House of Representatives and Senate. [35] Supporters hoped its success would inspire similar action in other states, contributing to a unified approach to copyright and privacy in the digital age and reinforcing the importance of safeguarding artists' rights against unauthorized use of their voices and likenesses. [36] [37]

In February 2024, Senator Scott Wiener introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act in the California legislature. The bill aims to reduce catastrophic risks by mandating safety tests for the most powerful AI models. If passed, it would also establish a publicly funded cloud computing cluster in California. [38]

Grassroots perspectives

In 2016, Joy Buolamwini, an AI researcher at the Massachusetts Institute of Technology, shared her personal experiences with discrimination in facial recognition software in a TED Talk. [39] Facial recognition software is widely understood to be less accurate at identifying people with darker skin, which matters especially in policing, the criminal justice system, healthcare, and employment. [40]

In 2022, a Pew Research Center survey of Americans found that only 18% of respondents were more excited than concerned about AI. [41] Biases in AI algorithms and methods that lead to discrimination are causes for concern among many activist organizations and academic institutions. Recommendations include increasing diversity among the creators of AI algorithms and addressing systemic bias in current legislation and AI development practices. [40] [42]


References

  1. Weaver, John Frank (2018-12-28). "Regulation of artificial intelligence in the United States". Research Handbook on the Law of Artificial Intelligence: 155–212. doi:10.4337/9781786439055.00018. ISBN   9781786439055.
  2. "The Administration's Report on the Future of Artificial Intelligence". whitehouse.gov. 2016-10-12. Retrieved 2023-11-01.
  3. National Science and Technology Council Committee on Technology (October 2016). "Preparing for the Future of Artificial Intelligence". whitehouse.gov via National Archives.
  4. "National Strategic Research and Development Plan for Artificial Intelligence" (PDF). National Science and Technology Council. October 2016.
  5. "About". National Security Commission on Artificial Intelligence. Retrieved 2020-06-29.
  6. Stefanik, Elise M. (2018-05-22). "H.R.5356 – 115th Congress (2017–2018): National Security Commission Artificial Intelligence Act of 2018". www.congress.gov. Retrieved 2020-03-13.
  7. Heinrich, Martin (2019-05-21). "Text - S.1558 - 116th Congress (2019–2020): Artificial Intelligence Initiative Act". www.congress.gov. Retrieved 2020-03-29.
  8. Scherer, Matthew U. (2015). "Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies". SSRN Working Paper Series. doi:10.2139/ssrn.2609777. ISSN   1556-5068.
  9. "Executive Order on Maintaining American Leadership in Artificial Intelligence – The White House". trumpwhitehouse.archives.gov. Retrieved 2023-11-01.
  10. Vought, Russell T. "MEMORANDUM FOR THE HEADS OF EXECUTIVE DEPARTMENTS AND AGENCIES - Guidance for Regulation of Artificial Intelligence Applications" (PDF). The White House.
  11. "AI Update: White House Issues 10 Principles for Artificial Intelligence Regulation". Inside Tech Media. 2020-01-14. Retrieved 2020-03-25.
  12. U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools (PDF). National Institute of Standards and Technology. 2019.
  13. AI principles: Recommendations on the ethical use of artificial intelligence by the Department of Defense (PDF). Washington, DC: United States Defense Innovation Board. 2019. OCLC   1126650738.
  14. "Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, "Guidance for Regulation of Artificial Intelligence Applications"". Federal Register. 2020-01-13. Retrieved 2020-11-28.
  15. Hwang, Thomas J.; Kesselheim, Aaron S.; Vokinger, Kerstin N. (2019-12-17). "Lifecycle Regulation of Artificial Intelligence– and Machine Learning–Based Software Devices in Medicine". JAMA. 322 (23): 2285–2286. doi:10.1001/jama.2019.16842. ISSN   0098-7484. PMID   31755907. S2CID   208230202.
  16. Kohli, Ajay; Mahajan, Vidur; Seals, Kevin; Kohli, Ajit; Jha, Saurabh (2019). "Concepts in U.S. Food and Drug Administration Regulation of Artificial Intelligence for Medical Imaging". American Journal of Roentgenology. 213 (4): 886–888. doi:10.2214/ajr.18.20410. ISSN   0361-803X. PMID   31166758. S2CID   174813195.
  17. National Science Technology Council (June 21, 2019). "The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update" (PDF).
  18. Gursoy, Furkan; Kakadiaris, Ioannis A. (2023). "Artificial intelligence research strategy of the United States: critical assessment and policy recommendations". Frontiers in Big Data. 6. doi:10.3389/fdata.2023.1206139. ISSN 2624-909X. PMC 10440374. PMID 37609602.
  19. NSCAI Final Report (PDF). Washington, DC: The National Security Commission on Artificial Intelligence. 2021.
  20. Homeland Newswire (2022-06-25). "Portman, Peters Introduce Bipartisan Bill to Ensure Federal Government is Prepared for Catastrophic Risks to National Security". HomelandNewswire. Archived from the original on June 25, 2022. Retrieved 2022-07-04.
  21. "Text - S.4488 - 117th Congress (2021–2022): A bill to establish an interagency committee on global catastrophic risk, and for other purposes. | Congress.gov | Library of Congress". Congress.gov. 2022-06-23. Retrieved 2022-07-04.
  22. "Blueprint for an AI Bill of Rights | OSTP". The White House. Retrieved 2023-11-01.
  23. "The White House just unveiled a new AI Bill of Rights". MIT Technology Review. Retrieved 2023-10-24.
  24. The White House (2023-07-21). "FACT SHEET: Biden–Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI". Retrieved 2023-09-25.
  25. The White House (2023-09-12). "FACT SHEET: Biden–Harris Administration Secures Voluntary Commitments from Eight Additional Artificial Intelligence Companies to Manage the Risks Posed by AI". Retrieved 2023-09-25.
  26. Chatterjee, Mohar (2023-10-12). "White House AI order to flex federal buying power". POLITICO. Retrieved 2023-10-27.
  27. The White House (2023-10-30). "FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence". Retrieved 2023-12-05.
  28. Lewis, James Andrew; Benson, Emily; Frank, Michael (2023-10-31). "The Biden Administration's Executive Order on Artificial Intelligence".
  29. The White House (2023-10-30). "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence". Retrieved 2023-12-05.
  30. Lanum, Nikolas (2023-11-07). "President Biden's AI executive order has 'dangerous limitations,' says deepfake detection company CEO". FOXBusiness. Retrieved 2023-12-05.
  31. "A Local Law to amend the administrative code of the city of New York, in relation to automated employment decision tools". The New York City Council. Retrieved 2023-11-01.
  32. Kestenbaum, Jonathan (July 5, 2023). "NYC's New AI Bias Law Broadly Impacts Hiring and Requires Audits". Bloomberg Law. Retrieved 2023-10-24.
  33. Kristin Robinson (2024). "Tennessee Adopts ELVIS Act, Protecting Artists' Voices From AI Impersonation". The New York Times. Retrieved March 26, 2024.
  34. Ashley King (2024). "The ELVIS Act Has Officially Been Signed Into Law — First State-Level AI Legislation In the US". Digital Music News. Retrieved March 26, 2024.
  35. Tennessee House (2024). "House Floor Session - 44th Legislative Day" (video). Tennessee House. Retrieved March 26, 2024.
  36. Audrey Gibbs (2024). "TN Gov. Lee signs ELVIS Act into law in honky-tonk, protects musicians from AI abuses". The Tennessean. Retrieved March 26, 2024.
  37. Alex Greene (2024). "The ELVIS Act". Memphis Flyer. Retrieved March 26, 2024.
  38. De Vynck, Gerrit (2024-02-08). "In Big Tech's backyard, California lawmaker unveils landmark AI bill". The Washington Post.
  39. Buolamwini, Joy (November 2016). "How I'm fighting bias in algorithms". Retrieved February 1, 2024.
  40. "How Artificial Intelligence Bias Affects Women and People of Color". Berkeley School of Information. December 8, 2021. Retrieved February 1, 2024.
  41. Rainie, Lee (March 17, 2022). "1. How Americans think about artificial intelligence". Pew Research Center. Retrieved May 11, 2024.
  42. Akselrod, Olga (July 13, 2021). "How Artificial Intelligence Can Deepen Racial and Economic Inequities". ACLU. Retrieved February 1, 2024.