Lethal autonomous weapons (LAWs) are a type of autonomous military system that can independently search for and engage targets based on programmed constraints and descriptions. LAWs are also known as lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons or killer robots. LAWs may operate in the air, on land, on water, underwater, or in space. As of 2018, the autonomy of such systems was restricted in the sense that a human gives the final command to attack, though there are exceptions with certain "defensive" systems.
An AWS can be defined as an artificial agent which, at a minimum, is able to change its own internal states to achieve a given goal, or set of goals, within its dynamic operating environment and without the direct intervention of another agent, and which may also be endowed with some ability to change its own transition rules without the intervention of another agent. It is deployed with the purpose of exerting kinetic force against a physical entity (whether an object or a human being), and to this end is able to identify, select or attack the target without the intervention of another agent. Once deployed, an AWS can be operated with or without some form of human control (in, on or out of the loop). A lethal AWS is a specific subset of AWS whose goal is to exert kinetic force against human beings. [1]
Being "autonomous" has different meanings in different fields of study. In terms of military weapon development, the identification of a weapon as autonomous is not as clear as in other areas. [2] The specific standard entailed in the concept of being autonomous can vary hugely between different scholars, nations and organizations.
Definitions of what constitutes a lethal autonomous weapon vary. The official United States Department of Defense Policy on Autonomy in Weapon Systems defines an autonomous weapon system as "A weapon system that, once activated, can select and engage targets without further intervention by a human operator." [3] Heather Roff, a writer for Case Western Reserve University School of Law, describes autonomous weapon systems as "armed weapons systems, capable of learning and adapting their 'functioning in response to changing circumstances in the environment in which [they are] deployed,' as well as capable of making firing decisions on their own." [4] This definition sets a fairly high threshold compared with those of scholars such as Peter Asaro and Mark Gubrud, discussed below.
Scholars such as Peter Asaro and Mark Gubrud set the threshold lower and would judge more weapon systems to be autonomous. They hold that any weapon system capable of releasing lethal force without the operation, decision, or confirmation of a human supervisor can be deemed autonomous. According to Gubrud, a weapon system operating partially or wholly without human intervention is considered autonomous. He argues that a weapon system does not need to be able to make decisions completely by itself in order to be called autonomous; it should be treated as autonomous as long as it is actively involved in one or more parts of the "preparation process", from finding the target to finally firing. [5] [6]
Other organizations, however, set the threshold for autonomy higher. The British Ministry of Defence defines autonomous weapon systems as "systems that are capable of understanding higher level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control - such human engagement with the system may still be present, though. While the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be." [7]
As a result, drafting a treaty between states requires a commonly accepted definition of what exactly constitutes an autonomous weapon. [8]
The oldest automatically triggered lethal weapons are the land mine, used since at least the 1600s, and the naval mine, used since at least the 1700s. Anti-personnel mines are banned in many countries by the 1997 Ottawa Treaty, to which the United States, Russia, and much of Asia and the Middle East are not parties.
Some current examples of LAWs are automated "hardkill" active protection systems, such as the radar-guided close-in weapon systems (CIWS) used to defend ships, which have been in use since the 1970s (e.g., the US Phalanx CIWS). Such systems can autonomously identify and attack oncoming missiles, rockets, artillery fire, aircraft and surface vessels according to criteria set by the human operator. Similar systems exist for tanks, such as the Russian Arena, the Israeli Trophy, and the German AMAP-ADS. Several types of stationary sentry guns, which can fire at humans and vehicles, are used in South Korea and Israel. Many missile defence systems, such as Iron Dome, also have autonomous targeting capabilities.
The main reason for not having a "human in the loop" in these systems is the need for rapid response. They have generally been used to protect personnel and installations against incoming projectiles.
According to The Economist , as technology advances, future applications of unmanned undersea vehicles might include mine clearance, mine-laying, anti-submarine sensor networking in contested waters, patrolling with active sonar, resupplying manned submarines, and becoming low-cost missile platforms. [9] In 2018, the U.S. Nuclear Posture Review alleged that Russia was developing a "new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo" named "Status 6". [10]
The Russian Federation is actively developing artificially intelligent missiles, [11] drones, [12] unmanned vehicles, military robots and medic robots. [13] [14] [15] [16]
Israeli Minister Ayoob Kara stated in 2017 that Israel is developing military robots, including ones as small as flies. [17]
In October 2018, Zeng Yi, a senior executive at the Chinese defense firm Norinco, gave a speech in which he said that "In future battlegrounds, there will be no people fighting", and that the use of lethal autonomous weapons in warfare is "inevitable". [18] In 2019, US Defense Secretary Mark Esper lashed out at China for selling drones capable of taking life with no human oversight. [19]
The British Army deployed new unmanned vehicles and military robots in 2019. [20]
The US Navy is developing "ghost" fleets of unmanned ships. [21]
In 2020 a Kargu 2 drone hunted down and attacked a human target in Libya, according to a report from the UN Security Council's Panel of Experts on Libya, published in March 2021. This may have been the first time an autonomous killer robot armed with lethal weaponry attacked human beings. [22] [23]
In May 2021 Israel conducted an AI guided combat drone swarm attack in Gaza. [24]
Since then there have been numerous reports of swarms and other autonomous weapons systems being used on battlefields around the world. [25]
In addition, DARPA is working on making swarms of 250 autonomous lethal drones available to the American military. [26]
Three classifications of the degree of human control of autonomous weapon systems were laid out by Bonnie Docherty in a 2012 Human Rights Watch report: human-in-the-loop, human-on-the-loop, and human-out-of-the-loop weapons. [27]
Current US policy states: "Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." [28] However, the policy requires that autonomous weapon systems that kill people or use kinetic force, selecting and engaging targets without further human intervention, be certified as compliant with "appropriate levels" and other standards; it does not forbid such systems outright. [29] "Semi-autonomous" hunter-killers that autonomously identify and attack targets do not even require certification. [29] Deputy Defense Secretary Robert O. Work said in 2016 that the Defense Department would "not delegate lethal authority to a machine to make a decision", but might need to reconsider this since "authoritarian regimes" may do so. [30] In October 2016 President Barack Obama stated that early in his career he was wary of a future in which a US president making use of drone warfare could "carry on perpetual wars all over the world, and a lot of them covert, without any accountability or democratic debate". [31] [32] In the US, security-related AI has fallen under the purview of the National Security Commission on Artificial Intelligence since 2018. [33] [34] On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report outlining five principles for weaponized AI and making 12 recommendations for the ethical use of artificial intelligence by the Department of Defense, intended to ensure that a human operator would always be able to look into the 'black box' and understand the kill-chain process. A major concern is how the report will be implemented. [35]
Stuart Russell, a professor of computer science at the University of California, Berkeley, has stated that his concern with LAWs is that they are, in his view, unethical and inhumane. A central problem with such systems is that it is hard for them to distinguish between combatants and non-combatants. [36]
Some economists [37] and legal scholars are concerned about whether LAWs would violate International Humanitarian Law, especially the principle of distinction, which requires the ability to discriminate combatants from non-combatants, and the principle of proportionality, which requires that damage to civilians be proportional to the military aim. [38] This concern is often invoked as a reason to ban "killer robots" altogether, but it is doubtful that it can serve as an argument against LAWs that do not violate International Humanitarian Law. [39] [40] [41]
A 2021 report by the American Congressional Research Service states that "there are no domestic or international legal prohibitions on the development or use of LAWs," although it acknowledges ongoing talks at the UN Convention on Certain Conventional Weapons (CCW). [42]
LAWs are said by some to blur the boundaries of who is responsible for a particular killing. [43] [37] Philosopher Robert Sparrow argues that autonomous weapons are causally but not morally responsible, similar to child soldiers. In each case, he argues there is a risk of atrocities occurring without an appropriate subject to hold responsible, which violates jus in bello. [44] Thomas Simpson and Vincent Müller argue that they may make it easier to record who gave which command. [45] Potential IHL violations by LAWs are – by definition – only applicable in conflict settings that involve the need to distinguish between combatants and civilians. As such, any conflict scenario devoid of civilians' presence – i.e. in space or the deep seas – would not run into the obstacles posed by IHL. [46]
The possibility of LAWs has generated significant debate, especially about the risk of "killer robots" roaming the earth in the near or distant future. The group Campaign to Stop Killer Robots formed in 2013. In July 2015, over 1,000 experts in artificial intelligence signed a letter warning of the threat of an artificial intelligence arms race and calling for a ban on autonomous weapons. The letter was presented in Buenos Aires at the 24th International Joint Conference on Artificial Intelligence (IJCAI-15) and was co-signed by Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn and Google DeepMind co-founder Demis Hassabis, among others. [47] [48]
According to PAX For Peace (one of the founding organisations of the Campaign to Stop Killer Robots), fully automated weapons (FAWs) will lower the threshold of going to war, as soldiers are removed from the battlefield and the public is distanced from experiencing war, giving politicians and other decision-makers more latitude in deciding when and how to go to war. [49] They warn that once deployed, FAWs will make democratic control of war more difficult. Daniel Suarez, IT specialist and author of Kill Decision, a novel on the topic, has issued a similar warning: in his view, FAWs might recentralize power into very few hands by requiring very few people to go to war. [49]
Several websites protest the development of LAWs by presenting the undesirable ramifications of continued research into applying artificial intelligence to the design of weapons. These websites regularly update news on the ethical and legal issues, so that visitors can catch up on recent international meetings and research articles concerning LAWs. [50]
The Holy See has called for the international community to ban the use of LAWs on several occasions. In November 2018, Archbishop Ivan Jurkovic, the permanent observer of the Holy See to the United Nations, stated that “In order to prevent an arms race and the increase of inequalities and instability, it is an imperative duty to act promptly: now is the time to prevent LAWs from becoming the reality of tomorrow’s warfare.” The Church worries that these weapons systems have the capability to irreversibly alter the nature of warfare, create detachment from human agency and call into question the humanity of societies. [51]
As of 29 March 2019, the majority of governments represented at a UN meeting to discuss the matter favoured a ban on LAWs. [52] A minority of governments, including those of Australia, Israel, Russia, the UK, and the US, opposed a ban. [52] The United States has stated that autonomous weapons have helped prevent the killing of civilians. [53]
In December 2022, a vote of the San Francisco Board of Supervisors to authorize San Francisco Police Department use of LAWs drew national attention and protests. [54] [55] The Board reversed this vote in a subsequent meeting. [56]
A third approach focuses on regulating the use of autonomous weapon systems in lieu of a ban. [57] Military AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal ('Track II') diplomacy by communities of experts, together with a legal and political verification process. [58] [59] [60] [61] In 2021, the United States Department of Defense requested a dialogue with the Chinese People's Liberation Army on AI-enabled autonomous weapons but was refused. [62]
A summit of 60 countries was held in 2023 on the responsible use of AI in the military. [63]
On 22 December 2023, a United Nations General Assembly resolution was adopted to support international discussion regarding concerns about LAWs. The vote was 152 in favor, four against, and 11 abstentions. [64]
An autonomous robot is a robot that acts without recourse to human control. Historic examples include space probes. Modern examples include self-driving vacuums and cars.
An unmanned aerial vehicle (UAV), commonly known as a drone, is an aircraft with no human pilot, crew, or passengers on board. UAVs were originally developed through the twentieth century for military missions too "dull, dirty or dangerous" for humans, and by the twenty-first, they had become essential assets to most militaries. As control technologies improved and costs fell, their use expanded to many non-military applications. These include aerial photography, area coverage, precision agriculture, forest fire monitoring, river monitoring, environmental monitoring, policing and surveillance, infrastructure inspections, smuggling, product deliveries, entertainment, and drone racing.
Robotic control is the system that governs the movement of robots. It involves the mechanical aspects and programmable systems that make it possible to control robots. Robots can be controlled by various means, including manual, wireless, semi-autonomous, and fully autonomous control.
Military robots are autonomous robots or remote-controlled mobile robots designed for military applications, from transport to search & rescue and attack.
An unmanned combat aerial vehicle (UCAV), also known as a combat drone, fighter drone or battlefield UAV, is an unmanned aerial vehicle (UAV) that is used for intelligence, surveillance, target acquisition, and reconnaissance and carries aircraft ordnance such as missiles, anti-tank guided missiles (ATGMs), and/or bombs in hardpoints for drone strikes. These drones are usually under real-time human control, with varying levels of autonomy. UCAVs are used for reconnaissance and for attacking targets and returning to base, unlike kamikaze drones, which are made only to explode on impact, or surveillance drones, which only gather intelligence.
Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley and was from 2008 to 2011 an adjunct professor of neurological surgery at the University of California, San Francisco. He holds the Smith-Zadeh Chair in Engineering at University of California, Berkeley. He founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley. Russell is the co-author with Peter Norvig of the authoritative textbook of the field of AI: Artificial Intelligence: A Modern Approach used in more than 1,500 universities in 135 countries.
Unmanned underwater vehicles (UUV), also known as uncrewed underwater vehicles and underwater drones, are submersible vehicles that can operate underwater without a human occupant. These vehicles may be divided into two categories: remotely operated underwater vehicles (ROUVs) and autonomous underwater vehicles (AUVs). ROUVs are remotely controlled by a human operator. AUVs are automated and operate independently of direct human input.
Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
Drone warfare is a form of warfare using robots. Robot types include: unmanned combat aerial vehicles (UCAV) or weaponized commercial unmanned aerial vehicles (UAV), unmanned surface vehicles, and ground-based drones. The United States, United Kingdom, Israel, China, South Korea, Iran, Iraq, Italy, France, India, Pakistan, Russia, Turkey, Ukraine and Poland are known to have manufactured operational UCAVs as of 2019.
The Campaign to Stop Killer Robots is a coalition of non-governmental organizations who seek to pre-emptively ban lethal autonomous weapons.
A loitering munition, also known as a suicide drone, kamikaze drone, or exploding drone, is a kind of aerial weapon with a built-in warhead that is typically designed to loiter around a target area until a target is located, then attack the target by crashing into it. Loitering munitions enable faster reaction times against hidden targets that emerge for short periods without placing high-value platforms near the target area and also allow more selective targeting as the attack can be changed mid-flight or aborted.
The International Committee for Robot Arms Control (ICRAC) is a "not-for-profit association committed to the peaceful use of robotics in the service of humanity and the regulation of robot weapons." It is concerned about the dangers that autonomous military robots, or lethal autonomous weapons, pose to peace and international security and to civilians in war.
Slaughterbots is a 2017 arms-control advocacy video presenting a dramatized near-future scenario where swarms of inexpensive microdrones use artificial intelligence and facial recognition software to assassinate political opponents based on preprogrammed criteria. It was released by the Future of Life Institute and Stuart Russell, a professor of computer science at Berkeley. On YouTube, the video quickly went viral, garnering over two million views, and was screened at the United Nations Convention on Certain Conventional Weapons meeting in Geneva the same month.
A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between superpowers for better military AI, driven by increasing geopolitical and military tensions.
The JARI USV is an uncrewed surface vehicle developed by the China Shipbuilding Industry Corporation (CSIC), specifically between its No. 716 Research Institute, the Jiangsu Automation Research Institute (JARI), and No. 702 Research Institute, the China Ship Scientific Research Centre (CSSRC). The uncrewed warship is designed for potential use by the People's Liberation Army Navy and export customers.
The HAL Combat Air Teaming System (CATS) is an Indian unmanned and manned combat aircraft air teaming system being developed by Hindustan Aeronautics Limited (HAL). The system will consist of a manned fighter aircraft acting as "mothership" of the system and a set of swarming UAVs and UCAVs governed by the mothership aircraft. A twin-seated HAL Tejas is likely to be the mothership aircraft. Various other sub components of the system are currently under development and will be jointly produced by HAL, National Aerospace Laboratories (NAL), Defence Research and Development Organisation (DRDO) and Newspace Research & Technologies.
Shield AI is an American aerospace and arms technology company based in San Diego, California. It develops artificial intelligence-powered fighter pilots, drones, and technology for military operations. Its clients include the United States Special Operations Command, US Air Force, US Marine Corps, US Navy and several international militaries.
A loyal wingman is a proposed type of unmanned combat air vehicle (UCAV) which incorporates artificial intelligence (AI) and is capable of collaborating with the next generation of manned combat aircraft, including sixth-generation fighters and bombers such as the Northrop Grumman B-21 Raider. Unlike a conventional UCAV, the loyal wingman is expected to be capable of surviving on the battlefield while being significantly lower-cost than a manned aircraft with similar capabilities. In the US, the concept is known as the collaborative combat aircraft (CCA).
Brave1 is a Government of Ukraine platform to bring together innovative companies with ideas and developments that can be used in the defense of Ukraine, launched on 26 April 2023.