Campaign to Stop Killer Robots

The Campaign to Stop Killer Robots launched in London in April 2013.

Jody Williams, a renowned American activist awarded the 1997 Nobel Peace Prize for her work toward the banning and clearing of anti-personnel mines, calls in a TEDx talk for a preventive and total ban on lethal autonomous weapons systems (LAWS).

The Campaign to Stop Killer Robots is a coalition of non-governmental organizations seeking to pre-emptively ban lethal autonomous weapons.[2][3]

History

First launched in April 2013, the Campaign to Stop Killer Robots has urged governments and the United Nations to enact policy outlawing the development of lethal autonomous weapons systems, also known as LAWS.[4] Several countries, including Israel,[citation needed] Russia,[5] South Korea,[citation needed] the United States,[6] and the United Kingdom,[7] oppose the call for a pre-emptive ban, arguing that existing international humanitarian law sufficiently regulates this area.

In December 2018, a global Ipsos poll quantified growing public opposition to fully autonomous weapons: 61% of adults surveyed across 26 countries opposed the use of lethal autonomous weapons systems. Two-thirds of those opposed thought these weapons would "cross a moral line because machines should not be allowed to kill," and more than half said the weapons would be "unaccountable."[8] A similar study across 23 countries, conducted in January 2017, had found 56% of respondents opposed to the use of these weapons.[9]

In November 2018, United Nations Secretary-General António Guterres called for a ban on killer robots, stating: "For me there is a message that is very clear – machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law."[10]

In July 2018, over 200 technology companies and 3,000 individuals signed a public pledge to "not participate nor support the development, manufacture, trade, or use of lethal autonomous weapons."[11] Three years earlier, in July 2015, over 1,000 experts in artificial intelligence had signed an open letter warning of the threat of an arms race in military artificial intelligence and calling for a ban on autonomous weapons. The letter was presented in Buenos Aires at the 24th International Joint Conference on Artificial Intelligence (IJCAI-15) and was co-signed by Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn, and Google DeepMind co-founder Demis Hassabis, among others.[12][13]

In March 2018, Kate Conger, then a journalist for Gizmodo and now with The New York Times, revealed Google's involvement in Project Maven, a US Department of Defense-funded program that sought to automatically process video footage shot by surveillance drones.[14] Several Google employees resigned over the project, and some 4,000 other employees sent a letter to Sundar Pichai, the company's chief executive, protesting Google's involvement and demanding that Google not "build warfare technology."[15] Facing internal pressure and public scrutiny, Google released a set of ethical principles for AI, which included a pledge not to develop artificial intelligence for use in weapons, and promised not to renew the Maven contract when it expired in 2019.[16]

The campaign won the Ypres Peace Prize in 2020[17][18] and was nominated for the 2021 Nobel Peace Prize by Norwegian MP Audun Lysbakken.[19][20]

Stop Killer Robots is due to release a documentary, Immoral Code,[21] in May 2022 on the subject of automation and killer robots. The film is due to premiere at the Prince Charles Cinema in London's Leicester Square and examines whether there are situations where it is morally and socially acceptable to take a life and, importantly, whether a computer would know the difference.

Steering committee members

The full membership list of the Campaign to Stop Killer Robots is available on their website. [22]

Countries calling for a prohibition on fully autonomous weapons

  1. Pakistan on 30 May 2013 [23]
  2. Ecuador on 13 May 2014 [24]
  3. Egypt on 13 May 2014 [25]
  4. Holy See on 13 May 2014 [26]
  5. Cuba on 16 May 2014
  6. Ghana on 16 April 2015 [27]
  7. Bolivia on 17 April 2015 
  8. State of Palestine on 13 November 2015 
  9. Zimbabwe on 12 November 2015 [28]
  10. Algeria on 11 April 2016 [29]
  11. Costa Rica on 11 April 2016 [30]
  12. Mexico on 13 April 2016 [31]
  13. Chile on 14 April 2016 [32]
  14. Nicaragua on 14 April 2016
  15. Panama on 12 December 2016
  16. Peru on 12 December 2016
  17. Argentina on 12 December 2016
  18. Venezuela on 13 December 2016
  19. Guatemala on 13 December 2016
  20. Brazil on 13 November 2017
  21. Iraq on 13 November 2017
  22. Uganda on 17 November 2017
  23. Austria on 9 April 2018
  24. Djibouti on 13 April 2018
  25. Colombia on 13 April 2018
  26. El Salvador on 22 November 2018
  27. Morocco on 22 November 2018 [33]

Related Research Articles

An autonomous robot is a robot that acts without recourse to human control. Historic examples include space probes. Modern examples include self-driving vacuums and cars.

<span class="mw-page-title-main">Military robot</span> Robotic devices designed for military applications

Military robots are autonomous robots or remote-controlled mobile robots designed for military applications, from transport to search & rescue and attack.

<span class="mw-page-title-main">Stuart J. Russell</span> British computer scientist and author (born 1962)

Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley, where he holds the Smith-Zadeh Chair in Engineering, and was from 2008 to 2011 an adjunct professor of neurological surgery at the University of California, San Francisco. He founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley. Russell is the co-author, with Peter Norvig, of the standard textbook of the field, Artificial Intelligence: A Modern Approach, used in more than 1,500 universities in 135 countries.

<span class="mw-page-title-main">AI takeover</span> Hypothetical outcome of artificial intelligence

An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

<span class="mw-page-title-main">Convention on Certain Conventional Weapons</span> Arms control treaty

The United Nations Convention on Certain Conventional Weapons, concluded at Geneva on October 10, 1980, and entered into force in December 1983, seeks to prohibit or restrict the use of certain conventional weapons which are considered excessively injurious or whose effects are indiscriminate. The full title is Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects. The convention covers land mines, booby traps, incendiary devices, blinding laser weapons and clearance of explosive remnants of war.

Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

<span class="mw-page-title-main">Toby Walsh</span>

Toby Walsh is Chief Scientist at UNSW.ai, the AI Institute of UNSW Sydney. He is a Laureate fellow, and professor of artificial intelligence in the UNSW School of Computer Science and Engineering at the University of New South Wales and Data61. He has served as Scientific Director of NICTA, Australia's centre of excellence for ICT research. He is noted for his work in artificial intelligence, especially in the areas of social choice, constraint programming and propositional satisfiability. He has served on the Executive Council of the Association for the Advancement of Artificial Intelligence.

<span class="mw-page-title-main">Waymo</span> Autonomous car technology company

Waymo LLC, formerly known as the Google Self-Driving Car Project, is an American autonomous driving technology company headquartered in Mountain View, California. It is a subsidiary of Alphabet Inc.

<span class="mw-page-title-main">Lethal autonomous weapon</span> Autonomous military technology system

Lethal autonomous weapons (LAWs) are a type of autonomous military system that can independently search for and engage targets based on programmed constraints and descriptions. LAWs are also known as lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons or killer robots. LAWs may operate in the air, on land, on water, underwater, or in space. The autonomy of systems as of 2018 was restricted in the sense that a human gives the final command to attack—though there are exceptions with certain "defensive" systems.

<span class="mw-page-title-main">Future of Life Institute</span> International nonprofit research institute

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.

Existential risk from AI refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

The International Committee for Robot Arms Control (ICRAC) is a "not-for-profit association committed to the peaceful use of robotics in the service of humanity and the regulation of robot weapons." It is concerned about the dangers that autonomous military robots, or lethal autonomous weapons, pose to peace and international security and to civilians in war.

Slaughterbots

Slaughterbots is a 2017 arms-control advocacy video presenting a dramatized near-future scenario in which swarms of inexpensive microdrones use artificial intelligence and facial-recognition software to assassinate political opponents based on preprogrammed criteria. It was released by the Future of Life Institute and Stuart Russell, a professor of computer science at Berkeley. The video quickly went viral on YouTube, garnering over two million views, and was screened at the United Nations Convention on Certain Conventional Weapons meeting in Geneva in the month of its release, November 2017.

A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between superpowers for better military AI, driven by increasing geopolitical and military tensions.

Do You Trust This Computer?

Do You Trust This Computer? is a 2018 American documentary film directed by Chris Paine that outlines the benefits and especially the dangers of artificial intelligence. It features interviews with a range of prominent figures in AI, such as Ray Kurzweil, Elon Musk, Jerry Kaplan, Michal Kosinski, D. Scott Phoenix, Hiroshi Ishiguro, and Jonathan Nolan. Paine is known for Who Killed the Electric Car? (2006) and its follow-up, Revenge of the Electric Car (2011).

Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.

Ajung Moon is a Korean-Canadian experimental roboticist specializing in ethics and responsible design of interactive robots and autonomous intelligent systems. She is an assistant professor of electrical and computer engineering at McGill University and the Director of the McGill Responsible Autonomy & Intelligent System Ethics (RAISE) lab. Her research interests lie in human-robot interaction, AI ethics, and robot ethics.

<span class="mw-page-title-main">Wendell Wallach</span> Bioethicist and author

Wendell Wallach is a bioethicist and author focused on the ethics and governance of emerging technologies, in particular artificial intelligence and neuroscience. He is a scholar at Yale University's Interdisciplinary Center for Bioethics, a senior advisor to The Hastings Center, and a Carnegie/Uehiro Senior Fellow at the Carnegie Council for Ethics in International Affairs, where he co-directs the "Artificial Intelligence Equality Initiative" with Anja Kaspersen. Wallach is also a fellow at the Center for Law and Innovation at the Sandra Day O'Connor College of Law at Arizona State University. He has written two books on the ethics of emerging technologies: Moral Machines: Teaching Robots Right from Wrong (2010) and A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control (2015). Wallach discusses his professional, personal, and spiritual journey, as well as some of the biggest conundrums facing humanity in the wake of the bio/digital revolution, in a podcast published by the Carnegie Council for Ethics in International Affairs (CCEIA).

References

  1. "Killer Robots" . Retrieved 2019-02-22.
  2. Horowitz, Michael; Scharre, Paul (19 November 2014). "Do Killer Robots Save Lives?". Politico. Archived from the original on 22 August 2015. Retrieved 14 April 2015.
  3. Baum, Seth (22 February 2015). "Stopping killer robots and other future threats". Bulletin of the Atomic Scientists. Archived from the original on 19 June 2017. Retrieved 14 April 2015.
  4. McVeigh, Tracey (23 February 2013). "Killer robots must be stopped, say campaigners". The Guardian . Retrieved 14 April 2015.
  5. KLARE, MICHAEL (2018). "U.S., Russia Impede Steps to Ban 'Killer Robots'". Arms Control Today. 48 (8): 31–33. ISSN   0196-125X. JSTOR   90025262.
  6. KLARE, MICHAEL (2018). "U.S., Russia Impede Steps to Ban 'Killer Robots'". Arms Control Today. 48 (8): 31–33. ISSN   0196-125X. JSTOR   90025262.
  7. Bowcott, Owen (28 July 2015). "UK opposes international ban on developing 'killer robots'". The Guardian . Retrieved 28 July 2015.
  8. "Six in Ten (61%) Respondents Across 26 Countries Oppose the Use of Lethal Autonomous Weapons Systems". Ipsos. Retrieved 2019-02-22.
  9. "Three in Ten Americans Support Using Autonomous Weapons". Ipsos. February 7, 2017. Retrieved February 22, 2019.
  10. "Remarks at "Web Summit"". United Nations Secretary-General. 2018-11-08. Retrieved 2019-02-22.
  11. "Lethal Autonomous Weapons Pledge". Future of Life Institute. 6 June 2018. Retrieved 2019-02-22.
  12. Gibbs, Samuel (27 July 2015). "Musk, Wozniak and Hawking urge ban on warfare AI and autonomous weapons". The Guardian . Retrieved 28 July 2015.
  13. Zakrzewski, Cat (27 July 2015). "Musk, Hawking Warn of Artificial Intelligence Weapons". The Wall Street Journal . Retrieved 28 July 2015.
  14. Conger, Kate (6 March 2018). "Google Is Helping the Pentagon Build AI for Drones". Gizmodo. Retrieved 2019-02-22.
  15. Shane, Scott; Wakabayashi, Daisuke (2018-04-04). "'The Business of War': Google Employees Protest Work for the Pentagon". The New York Times. ISSN   0362-4331 . Retrieved 2019-02-22.
  16. "AI at Google: our principles". Google. 2018-06-07. Retrieved 2019-02-22.
  17. "Children Vote to Stop Killer Robots". Human Rights Watch. 2020-06-09. Retrieved 2022-04-04.
  18. "Swiss Philanthropy Foundation"Campaign to Stop Killer Robots" wins the Ypres Peace Prize 2020 - Swiss Philanthropy Foundation". www.swissphilanthropy.ch. Retrieved 2022-04-04.
  19. "Flere fredsprisforslag før fristen gikk ut". Aftenposten . Norwegian News Agency. 31 January 2021.
  20. "Hektisk nomineringsaktivitet før fredsprisfrist". Dagsavisen . 31 January 2021.
  21. "Immoral Code - A film by Stop Killer Robots". www.immoralcode.io. Retrieved 2022-04-04.
  22. "The Campaign to Stop Killer Robots".
  23. "Statement by Pakistan" (PDF).
  24. "Statement of Ecuador" (PDF).
  25. "Statement of Egypt" (PDF).
  26. "Statement of the Holy See" (PDF).
  27. "Statement of Ghana" (PDF).
  28. "Statement of Zimbabwe" (PDF).
  29. "Statement of Algeria" (PDF).
  30. "Statement of Costa Rica" (PDF).
  31. "Statement of Mexico" (PDF).
  32. "Statement of Chile" (PDF).
  33. "Statement by Morocco" (PDF).