Executive Order 14110

Executive Order 14110
Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Type: Executive order
Executive Order number: 14110
Signed by: Joe Biden on October 30, 2023
Federal Register document number: 2023-24283
Summary: Creates a national approach to governing artificial intelligence. [1]

Executive Order 14110, titled Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (sometimes referred to as the "Executive Order on Artificial Intelligence" [2] [3] ), is the 126th executive order signed by U.S. President Joe Biden. Signed on October 30, 2023, the order defines the administration's policy goals regarding artificial intelligence (AI) and directs executive agencies to take actions in pursuit of those goals. The order is considered to be the most comprehensive piece of AI governance enacted by the United States to date. [4] [5]


Policy goals outlined in the executive order pertain to promoting competition in the AI industry, preventing AI-enabled threats to civil liberties and national security, and ensuring U.S. global competitiveness in the AI field. [6] The executive order requires a number of major federal agencies to create dedicated "chief artificial intelligence officer" (chief AI officer) positions within their organizations. [7]

Background

The drafting of the order was motivated by the rapid pace of development in generative AI models in the 2020s, including the release of the large language model chatbot ChatGPT. [8] Executive Order 14110 is the third executive order to deal explicitly with AI, the first two having been signed by then-President Donald Trump. [9] [10]

The development of AI models without policy safeguards has raised a variety of concerns among experts and commentators. These range from future existential risk from advanced AI models to immediate concerns surrounding current technologies' ability to disseminate misinformation, enable discrimination, and undermine national security. [11]

In August 2023, Arati Prabhakar, the director of the Office of Science and Technology Policy, indicated that the White House was expediting its work on executive action on AI. [12] A week prior to the executive order's unveiling, Prabhakar indicated that Office of Management and Budget (OMB) guidance on the order would be released "soon" after. [13]

Policy goals and provisions

White House graphic listing the provisions of the executive order

The order has been characterized as an effort for the United States to capture potential benefits from AI while mitigating risks associated with AI technologies. [14] Upon signing the order, Biden stated that AI technologies were being developed at "warp speed", and argued that to "realize the promise of AI and avoid the risk, we need to govern this technology". [15]

Policy goals outlined by the order include the following: [6]

- Establishing new standards for AI safety and security
- Protecting Americans' privacy
- Advancing equity and civil rights
- Standing up for consumers, patients, students, and workers
- Promoting innovation and competition
- Advancing American leadership abroad
- Ensuring responsible and effective government use of AI

Impact on agencies

Creation of chief AI officer positions

The executive order requires a number of large federal agencies to appoint a chief artificial intelligence officer; several departments had already appointed such an officer before the order was signed. Following the order's signing, the news publication FedScoop confirmed that the General Services Administration (GSA) and the United States Department of Education had appointed chief AI officers, and the National Science Foundation (NSF) confirmed that it had elevated an official to serve as its chief AI officer. [7]

Department responsibilities

Under the executive order, the Department of Homeland Security (DHS) is responsible for developing AI-related security guidelines, including on cybersecurity matters. The DHS will also work with private-sector firms in the energy industry and other "critical infrastructure" sectors to coordinate responses to AI-enabled security threats. [16] Executive Order 14110 also directs the Department of Veterans Affairs to launch an AI technology competition aimed at reducing occupational burnout among healthcare workers through AI-assisted tools for routine tasks. [17]

The order additionally directs the Department of Commerce's National Institute of Standards and Technology (NIST) to develop a resource focused on generative artificial intelligence to supplement the existing AI Risk Management Framework. [18]

Analysis

President Biden visiting a meeting held by Vice President Kamala Harris alongside CEOs of AI companies on May 4, 2023

The executive order has been described as the most comprehensive piece of governance by the United States government pertaining to AI. [4] [5] Earlier in 2023, prior to the signing of the order, the Biden administration had announced a Blueprint for an AI Bill of Rights and had secured non-binding AI safety commitments from major tech companies. The executive order was issued at a time when lawmakers, including Senate Majority Leader Chuck Schumer, were pushing for legislation to regulate AI in the 118th United States Congress. [19]

According to Axios, despite the wide scope of the executive order, it notably does not address a number of AI-related policy proposals. These include proposals for a "licensing regime" to govern advanced AI models, which have received support from industry leaders including Sam Altman. Additionally, the executive order does not seek to prohibit "high-risk" uses of AI technology, and does not mandate that tech companies release information about their AI systems' training data and models. [20]

Reception

Political and media reception

The editorial board of the Houston Chronicle described the order as a "first step toward protecting humanity". [21] The issuing of the order received praise from Democratic members of Congress, including Senator Richard Blumenthal (D-CT) and Representative Ted Lieu (D-CA). [22] Representative Don Beyer (D-VA), who leads the House AI Caucus, praised the order as a "comprehensive strategy for responsible innovation", while arguing that Congress must take initiative to pass legislation on AI. [19]

The draft of the order received criticism from Senator Ted Cruz (R-TX), who described it as creating "barriers to innovation disguised as safety measures". [23]

Industry reception

The executive order received strong criticism from the Chamber of Commerce as well as tech industry groups including NetChoice and the Software and Information Industry Association, all of which count "Big Tech" companies Amazon, Meta, and Google as members. Representatives from the organizations argued that the executive order threatens to hinder private sector innovation. [24]

Civil society reception

According to CNBC, a number of leaders of advocacy organizations praised the executive order for its provisions on "AI fairness", while simultaneously urging congressional action to strengthen regulation. Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, praised the order while urging Congress to take initiative to "ensure that innovation makes us more fair, just, and prosperous, rather than surveilled, silenced, and stereotyped". A representative from the American Civil Liberties Union (ACLU) praised the order's provisions on combating AI-enabled discrimination, while also voicing concern over sections of the order focused on law enforcement and national security. [25]


References

  1. "FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence". The White House. 2023-10-30. Retrieved 2023-11-12.
  2. "OMB Releases Implementation Guidance Following President Biden's Executive Order on Artificial Intelligence | OMB". The White House. 2023-11-01. Retrieved 2023-11-25.
  3. Wu, Tim (2023-11-07). "Opinion | In Regulating A.I., We May Be Doing Too Much. And Too Little". The New York Times. ISSN 0362-4331. Retrieved 2023-11-25. When President Biden signed his sweeping executive order on artificial intelligence last week...
  4. Ryan-Mosley, Tate; Heikkilä, Melissa (2023-10-30). "Three things to know about the White House's executive order on AI". MIT Technology Review.
  5. Lima, Cristiano; Zakrzewski, Cat (2023-10-30). "Biden Signs AI Executive Order, the Most Expansive Regulatory Attempt Yet". The Washington Post. Retrieved 2023-11-12.
  6. Neill, Bridget; Hallmark, John D.; Jackson, Richard J.; Diasio, Dan (2023-10-31). "Key Takeaways from the Biden Administration Executive Order on AI". EY. Retrieved 2023-11-12.
  7. Alder, Madison; Heilweil, Rebecca (2023-11-21). "Aronson takes chief AI officer position at NSF as agencies begin work on Biden executive order". FedScoop. Retrieved 2023-11-28.
  8. Leffer, Lauren (2023-10-31). "Biden's Executive Order on AI Is a Good Start, Experts Say, but Not Enough". Scientific American. Retrieved 2023-11-12.
  9. "Maintaining American Leadership in Artificial Intelligence". Federal Register. 2019-02-14.
  10. "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government". Federal Register. 2020-12-08.
  11. Leffer, Lauren. "Biden's Executive Order on AI Is a Good Start, Experts Say, but Not Enough". Scientific American. Retrieved 2023-11-29.
  12. Groll, Elias (2023-08-12). "White House is fast-tracking executive order on artificial intelligence". CyberScoop. Retrieved 2023-11-25.
  13. Krishan, Nihal (2023-10-26). "OSTP's Arati Prabhakar says OMB guidance on AI to be released 'soon' after AI executive order". FedScoop. Retrieved 2023-11-25.
  14. Kang, Cecilia; Sanger, David E. (2023-10-30). "Biden Issues Executive Order to Create A.I. Safeguards". The New York Times. Retrieved 2023-11-12.
  15. "Biden wants to move fast on AI safeguards and signs an executive order to address his concerns". AP News. 2023-10-30. Retrieved 2023-11-25.
  16. Gilmer, Ellen M.; Riley, Tonya (2023-11-27). "AI Goals Stretch Homeland Agency's Resources, Privacy Safeguards". Bloomberg Law. Retrieved 2023-11-28.
  17. Nihill, Caroline (2023-10-31). "VA launches tech sprint for health care innovation required by AI executive order". FedScoop. Retrieved 2023-11-28.
  18. Alder, Madison (2023-11-02). "NIST seeks participants for new artificial intelligence consortium". FedScoop. Retrieved 2023-11-29.
  19. Morrison, Sara (2023-10-31). "President Biden's new plan to regulate AI". Vox. Retrieved 2023-11-29.
  20. Heath, Ryan (2023-11-01). "What's in Biden's AI executive order — and what's not". Axios. Retrieved 2023-11-28.
  21. Houston Chronicle Editorial Board (2023-11-03). "Biden's AI executive order is a first step toward protecting humanity (Editorial)". Houston Chronicle. Retrieved 2023-11-25.
  22. "Biden's Executive Order Ensures Safe, Secure AI: Reactions". Mirage News. 2023-11-01. Retrieved 2023-11-25.
  23. Chatterjee, Mohar (2023-10-30). "White House offers a new strategy for AI — and picks new fights". POLITICO. Retrieved 2023-11-25.
  24. Krishan, Nihal (2023-11-09). "Tech groups push back on Biden AI executive order, raising concerns that it could crush innovation". FedScoop. Retrieved 2023-11-29.
  25. Feiner, Lauren; Field, Hayden (2023-11-02). "Biden's AI order didn't go far enough to address fairness, but it's a good first step, advocates say". CNBC. Retrieved 2023-11-29.