The Campaign to Stop Killer Robots is a coalition of non-governmental organizations that seeks a pre-emptive ban on lethal autonomous weapons. [2] [3]
First launched in April 2013, the Campaign to Stop Killer Robots has urged governments and the United Nations to adopt policy outlawing the development of lethal autonomous weapons systems (LAWS). [4] Several countries, including Israel, Russia, [5] South Korea, the United States, [6] and the United Kingdom, [7] oppose the call for a pre-emptive ban, holding that existing international humanitarian law sufficiently regulates this area.
In December 2018, a global Ipsos poll quantified growing public opposition to fully autonomous weapons: 61% of adults surveyed across 26 countries opposed the use of lethal autonomous weapons systems. Two-thirds of those opposed thought these weapons would "cross a moral line because machines should not be allowed to kill," and more than half said the weapons would be "unaccountable." [8] A similar survey across 23 countries in January 2017 had found 56% of respondents opposed to the use of these weapons. [9]
In November 2018, United Nations Secretary-General António Guterres called for a ban on killer robots, stating, "For me there is a message that is very clear – machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law." [10]
In July 2018, over 200 technology companies and 3,000 individuals signed a public pledge to "not participate nor support the development, manufacture, trade, or use of lethal autonomous weapons." [11] Three years earlier, in July 2015, over 1,000 artificial intelligence experts had signed an open letter warning of the threat of an arms race in military artificial intelligence and calling for a ban on autonomous weapons. The letter was presented in Buenos Aires at the 24th International Joint Conference on Artificial Intelligence (IJCAI-15) and was co-signed by Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn, and Google DeepMind co-founder Demis Hassabis, among others. [12] [13]
In June 2018, Kate Conger, then a journalist for Gizmodo and now with The New York Times, revealed Google's involvement in Project Maven, a US Department of Defense-funded program that sought to autonomously process video footage shot by surveillance drones. [14] Several Google employees resigned over the project, and 4,000 others sent a letter to Sundar Pichai, the company's chief executive, protesting Google's involvement and demanding that Google not "build warfare technology." [15] Facing internal pressure and public scrutiny, Google released a set of ethical principles for AI that included a pledge not to develop artificial intelligence for use in weapons, and promised not to renew the Maven contract when it expired in 2019. [16]
The campaign won the Ypres Peace Prize in 2020 [17] [18] and was nominated for the 2021 Nobel Peace Prize by Norwegian MP Audun Lysbakken. [19] [20]
Stop Killer Robots is due to release a documentary, Immoral Code, [21] in May 2022 on the subject of automation and killer robots. The film is due to premiere at the Prince Charles Cinema in London's Leicester Square and examines whether there are situations in which it is morally and socially acceptable to take a life, and, importantly, whether a computer would know the difference.
The full membership list of the Campaign to Stop Killer Robots is available on their website. [22]
An autonomous robot is a robot that acts without recourse to human control. Historic examples include space probes; modern examples include robotic vacuum cleaners and self-driving cars.
Military robots are autonomous robots or remote-controlled mobile robots designed for military applications, ranging from transport to search and rescue to attack.
Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley, where he holds the Smith-Zadeh Chair in Engineering and founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI); from 2008 to 2011 he was also an adjunct professor of neurological surgery at the University of California, San Francisco. Russell is the co-author, with Peter Norvig, of the field's standard textbook, Artificial Intelligence: A Modern Approach, used in more than 1,500 universities in 135 countries.
An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species. Stories of AI takeovers have been popular throughout science fiction, and recent advances have made the concern feel more immediate. Possible scenarios include the replacement of the entire human workforce through automation, takeover by an artificial superintelligence (ASI), and the notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
The United Nations Convention on Certain Conventional Weapons, concluded at Geneva on October 10, 1980, and entered into force in December 1983, seeks to prohibit or restrict the use of certain conventional weapons which are considered excessively injurious or whose effects are indiscriminate. The full title is Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects. The convention covers land mines, booby traps, incendiary devices, blinding laser weapons and clearance of explosive remnants of war.
Robot ethics, sometimes known as "roboethics", concerns ethical problems that arise with robots, such as whether robots pose a threat to humans in the short or long term, whether some uses of robots are problematic, and how robots should be designed so that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots as robots become increasingly advanced. Robot ethics is a sub-field of the ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies in ways that ensure the safety of the human race.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
Toby Walsh is Chief Scientist at UNSW.ai, the AI Institute of UNSW Sydney. He is a Laureate Fellow and professor of artificial intelligence in the School of Computer Science and Engineering at the University of New South Wales and at Data61, and has served as Scientific Director of NICTA, Australia's centre of excellence for ICT research. He is noted for his work in artificial intelligence, especially in the areas of social choice, constraint programming, and propositional satisfiability. He has served on the Executive Council of the Association for the Advancement of Artificial Intelligence.
Waymo LLC, formerly known as the Google Self-Driving Car Project, is an American autonomous driving technology company headquartered in Mountain View, California. It is a subsidiary of Alphabet Inc.
Lethal autonomous weapons (LAWs) are a type of autonomous military system that can independently search for and engage targets based on programmed constraints and descriptions. LAWs are also known as lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons, or killer robots. LAWs may operate in the air, on land, on water, underwater, or in space. As of 2018, the autonomy of such systems was restricted in the sense that a human gave the final command to attack, though there were exceptions with certain "defensive" systems.
The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.
In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.
Existential risk from AI refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
The International Committee for Robot Arms Control (ICRAC) is a "not-for-profit association committed to the peaceful use of robotics in the service of humanity and the regulation of robot weapons." It is concerned about the dangers that autonomous military robots, or lethal autonomous weapons, pose to peace and international security and to civilians in war.
Slaughterbots is a 2017 arms-control advocacy video presenting a dramatized near-future scenario in which swarms of inexpensive microdrones use artificial intelligence and facial-recognition software to assassinate political opponents based on preprogrammed criteria. It was released by the Future of Life Institute and Stuart Russell, a professor of computer science at the University of California, Berkeley. The video quickly went viral on YouTube, garnering over two million views, and was screened at the United Nations Convention on Certain Conventional Weapons meeting in Geneva the same month.
A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between superpowers for better military AI, driven by increasing geopolitical and military tensions.
Do You Trust This Computer? is a 2018 American documentary film directed by Chris Paine that outlines the benefits and especially the dangers of artificial intelligence. It features interviews with a range of prominent individuals relevant to AI, such as Ray Kurzweil, Elon Musk, Jerry Kaplan, Michal Kosinski, D. Scott Phoenix, Hiroshi Ishiguro, and Jonathan Nolan. Paine previously directed Who Killed the Electric Car? (2006) and its follow-up, Revenge of the Electric Car (2011).
Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.
Ajung Moon is a Korean-Canadian experimental roboticist specializing in ethics and responsible design of interactive robots and autonomous intelligent systems. She is an assistant professor of electrical and computer engineering at McGill University and the Director of the McGill Responsible Autonomy & Intelligent System Ethics (RAISE) lab. Her research interests lie in human-robot interaction, AI ethics, and robot ethics.
Wendell Wallach is a bioethicist and author focused on the ethics and governance of emerging technologies, in particular artificial intelligence and neuroscience. He is a scholar at Yale University's Interdisciplinary Center for Bioethics, a senior advisor to The Hastings Center, and a Carnegie/Uehiro Senior Fellow at the Carnegie Council for Ethics in International Affairs (CCEIA), where he co-directs the "Artificial Intelligence Equality Initiative" with Anja Kaspersen. Wallach is also a fellow at the Center for Law and Innovation at the Sandra Day O'Connor College of Law at Arizona State University. He has written two books on the ethics of emerging technologies: Moral Machines: Teaching Robots Right from Wrong (2010) and A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control (2015). In a podcast published by the CCEIA, Wallach discusses his professional, personal, and spiritual journey, as well as some of the biggest conundrums facing humanity in the wake of the bio/digital revolution.