Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy

The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy is a U.S. government proposal for international norms and arms control governing the use of artificial intelligence in the military.[1][2]

It was announced at the Summit on Responsible Artificial Intelligence in the Military Domain by Bonnie Jenkins, Under Secretary of State for Arms Control and International Security.[3] As of January 2024, fifty-one countries had signed the declaration.[4] The U.S. government sees it as an extension of Department of Defense Directive 3000.09, the current U.S. policy on autonomous weapons.[5]

It covers areas such as lethal autonomous weapons and decision-making about the use of weapons.

Related Research Articles

An autonomous robot is a robot that performs tasks without human control. Historic examples include space probes; modern examples include robotic vacuum cleaners and self-driving cars.

Robotic control is the system that governs the movement of robots, encompassing both the mechanical aspects and the programmable systems that make robots controllable. Robots can be controlled manually, wirelessly, semi-autonomously, or fully autonomously.

<span class="mw-page-title-main">Dual-use technology</span> Technology that can be used for both peaceful and military purposes

In politics, diplomacy and export control, dual-use items refer to goods, software and technology that can be used for both civilian and military applications.

<span class="mw-page-title-main">Bureau of Political-Military Affairs</span>

The Bureau of Political-Military Affairs (PM) is an agency within the United States Department of State that bridges the Department of State and the Department of Defense. It provides policy direction in the areas of international security, security assistance, military operations, defense strategy and policy, military use of space, and defense trade. It is headed by the Assistant Secretary of State for Political-Military Affairs.

<span class="mw-page-title-main">Albania and weapons of mass destruction</span>

Albania once possessed a stockpile of weapons of mass destruction. This stockpile of chemical weapons included 16,678 kilograms (36,769 lb) of mustard gas, lewisite, adamsite, and phenacyl chloride (chloroacetophenone).

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificial intelligence (AI) systems.

<span class="mw-page-title-main">Arms race</span> Competition between two or more parties to have superior armed forces

An arms race is a competition between two or more states for military superiority, pursued through the production of weapons, the growth of their militaries, and the development of superior military technology. Unlike a sporting race, a discrete event whose winner is the outcome of a single contest, an arms race is a spiralling system of ongoing and potentially open-ended behavior.

<span class="mw-page-title-main">Lethal autonomous weapon</span> Autonomous military technology system

Lethal autonomous weapons (LAWs) are a type of autonomous military system that can independently search for and engage targets based on programmed constraints and descriptions. LAWs are also known as lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons, or killer robots. LAWs may operate in the air, on land, on water, underwater, or in space. As of 2018, the autonomy of such systems was restricted in the sense that a human gives the final command to attack, though there are exceptions with certain "defensive" systems.

<span class="mw-page-title-main">Future of Life Institute</span> International nonprofit research institute

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

<span class="mw-page-title-main">Campaign to Stop Killer Robots</span> Coalition of organizations

The Campaign to Stop Killer Robots is a coalition of non-governmental organizations that seek to pre-emptively ban lethal autonomous weapons.

Existential risk from artificial general intelligence is the idea that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.

Slaughterbots: 2017 film

Slaughterbots is a 2017 arms-control advocacy video presenting a dramatized near-future scenario in which swarms of inexpensive microdrones use artificial intelligence and facial recognition software to assassinate political opponents based on preprogrammed criteria. It was released by the Future of Life Institute and Stuart Russell, a professor of computer science at the University of California, Berkeley. The video quickly went viral on YouTube, garnering over two million views, and was screened at a United Nations Convention on Certain Conventional Weapons meeting in Geneva the same month.

A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between global superpowers for better military AI, driven by increasing geopolitical and military tensions.

The artificial intelligence (AI) industry in China is a rapidly developing multi-billion-dollar industry. China's AI development dates to the late 1970s, following Deng Xiaoping's economic reforms, which emphasized science and technology as the country's primary productive force.

Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules, and public-sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary both to encourage AI and to manage its associated risks, but it remains challenging. The regulation of blockchain algorithms is another emerging topic, often discussed alongside the regulation of AI algorithms. Many countries have enacted regulations on high-frequency trading, a field that technological progress is shifting into the realm of AI algorithms.

The regulation of artificial intelligence is the development of public-sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supranational bodies such as the IEEE and the OECD. Since 2016, a wave of AI ethics guidelines has been published in order to maintain social control over the technology. Regulation is considered necessary both to encourage AI and to manage its associated risks. In addition to regulation, organizations that deploy AI need to play a central role in creating and deploying trustworthy AI, in line with the principles of trustworthy AI, and to take accountability for mitigating its risks. Regulation of AI through mechanisms such as review boards can also be seen as a social means of approaching the AI control problem.

Mariarosaria Taddeo is an Italian philosopher working on the ethics of digital technologies. She is Professor of Digital Ethics and Defence Technologies at the Oxford Internet Institute, University of Oxford, and Dstl Ethics Fellow at the Alan Turing Institute, London.

<span class="mw-page-title-main">Anja Kaspersen</span> Norwegian diplomat and academic

Anja Kaspersen is a director for Global Markets Development, New Frontiers and Emerging Spaces at IEEE, the world's largest technical professional organization. Kaspersen is also a senior fellow at the Carnegie Council for Ethics in International Affairs, where she co-directs the Artificial Intelligence & Equality Initiative with Wendell Wallach. Together with scholars and thinkers in the field of technology governance, and with the support of the Carnegie Council and IEEE, Kaspersen and Wallach put forward a proposal for the international governance of AI.

Shield AI is an American aerospace and defense technology company based in San Diego, California. It develops artificial intelligence-powered fighter pilots, drones, and technology for defense operations. Its clients include the United States Special Operations Command, the US Air Force, the US Marine Corps, the US Navy, and several international militaries.

The Summit on Responsible Artificial Intelligence in the Military Domain, also known as REAIM 2023, was a diplomatic conference regarding military uses of artificial intelligence, held at the World Forum in The Hague on 15–16 February 2023.

References

  1. Knight, Will. "Should Algorithms Control Nuclear Launch Codes? The US Says No". Wired. Retrieved 27 March 2023.
  2. "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy". U.S. Department of State – Bureau of Arms Control, Verification and Compliance. 16 February 2023. Retrieved 27 March 2023.
  3. "US issues declaration on responsible use of AI in the military". Reuters. 16 February 2023. Retrieved 27 March 2023.
  4. "What does global military AI governance need?". European Leadership Network. https://www.europeanleadershipnetwork.org/commentary/what-does-global-military-ai-governance-need/
  5. "The State of DoD AI and Autonomy Policy". Center for Strategic and International Studies. https://www.csis.org/analysis/state-dod-ai-and-autonomy-policy