Campaign to Stop Killer Robots

The Campaign to Stop Killer Robots launched in London in April 2013.

Jody Williams, a renowned American activist (Nobel Peace Prize in 1997 for her work toward the banning and clearing of anti-personnel mines), calling in a TEDx talk for a preventive and total ban on lethal autonomous weapons systems (LAWS).

The Campaign to Stop Killer Robots is a coalition of non-governmental organizations that seeks to pre-emptively ban lethal autonomous weapons. [2] [3]

History

First launched in April 2013, the Campaign to Stop Killer Robots has urged governments and the United Nations to issue policy outlawing the development of lethal autonomous weapons systems, also known as LAWS. [4] Several countries, including Israel[citation needed], Russia, [5] South Korea[citation needed], the United States [6] and the United Kingdom, [7] oppose the call for a preemptive ban, arguing that existing international humanitarian law provides sufficient regulation for this area.

In December 2018, a global Ipsos poll quantified growing public opposition to fully autonomous weapons: 61% of adults surveyed across 26 countries opposed the use of lethal autonomous weapons systems. Two-thirds of those opposed thought these weapons would “cross a moral line because machines should not be allowed to kill”, and more than half said the weapons would be “unaccountable”. [8] A similar study across 23 countries, conducted in January 2017, found that 56% of respondents opposed the use of these weapons. [9]

In November 2018, the United Nations Secretary-General António Guterres called for a ban on killer robots, stating, "For me there is a message that is very clear – machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law." [10]

In July 2018, over 200 technology companies and 3,000 individuals signed a public pledge to "not participate nor support the development, manufacture, trade, or use of lethal autonomous weapons." [11] In July 2015, over 1,000 experts in artificial intelligence signed on to a letter warning of the threat of an arms race in military artificial intelligence and calling for a ban on autonomous weapons. The letter was presented in Buenos Aires at the 24th International Joint Conference on Artificial Intelligence (IJCAI-15) and was co-signed by Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn and Google DeepMind co-founder Demis Hassabis, among others. [12] [13]

In June 2018, Kate Conger, then a journalist for Gizmodo and now with The New York Times, revealed Google's involvement in Project Maven, a US Department of Defense-funded program that sought to autonomously process video footage shot by surveillance drones. [14] Several Google employees resigned over the project, and 4,000 others sent a letter to Sundar Pichai, the company's chief executive, protesting Google's involvement and demanding that Google not "build warfare technology." [15] Facing internal pressure and public scrutiny, Google released a set of ethical principles for AI that included a pledge not to develop artificial intelligence for use in weapons, and promised not to renew the Maven contract when it expired in 2019. [16]

The campaign won the Ypres Peace Prize in 2020 [17] [18] and was nominated for the 2021 Nobel Peace Prize by Norwegian MP Audun Lysbakken. [19] [20]

Stop Killer Robots is due to release a documentary, Immoral Code, [21] in May 2022 on the subject of automation and killer robots. The film is due to premiere at the Prince Charles Cinema in London's Leicester Square and examines whether there are situations in which it is morally and socially acceptable to take a life, and whether a computer would know the difference.

Steering committee members

The full membership list of the Campaign to Stop Killer Robots is available on their website. [22]

Countries calling for a prohibition on fully autonomous weapons

  1. Pakistan on 30 May 2013 [23]
  2. Ecuador on 13 May 2014 [24]
  3. Egypt on 13 May 2014 [25]
  4. Holy See on 13 May 2014 [26]
  5. Cuba on 16 May 2014
  6. Ghana on 16 April 2015 [27]
  7. Bolivia on 17 April 2015 
  8. State of Palestine on 13 November 2015 
  9. Zimbabwe on 12 November 2015 [28]
  10. Algeria on 11 April 2016 [29]
  11. Costa Rica on 11 April 2016 [30]
  12. Mexico on 13 April 2016 [31]
  13. Chile on 14 April 2016 [32]
  14. Nicaragua on 14 April 2016
  15. Panama on 12 December 2016
  16. Peru on 12 December 2016
  17. Argentina on 12 December 2016
  18. Venezuela on 13 December 2016
  19. Guatemala on 13 December 2016
  20. Brazil on 13 November 2017
  21. Iraq on 13 November 2017
  22. Uganda on 17 November 2017
  23. Austria on 9 April 2018
  24. Djibouti on 13 April 2018
  25. Colombia on 13 April 2018
  26. El Salvador on 22 November 2018
  27. Morocco on 22 November 2018 [33]


References

  1. "Killer Robots". Retrieved 2019-02-22.
  2. Horowitz, Michael; Scharre, Paul (19 November 2014). "Do Killer Robots Save Lives?". Politico. Archived from the original on 22 August 2015. Retrieved 14 April 2015.
  3. Baum, Seth (22 February 2015). "Stopping killer robots and other future threats". Bulletin of the Atomic Scientists. Archived from the original on 19 June 2017. Retrieved 14 April 2015.
  4. McVeigh, Tracey (23 February 2013). "Killer robots must be stopped, say campaigners". The Guardian. Retrieved 14 April 2015.
  5. Klare, Michael (2018). "U.S., Russia Impede Steps to Ban 'Killer Robots'". Arms Control Today. 48 (8): 31–33. ISSN 0196-125X. JSTOR 90025262.
  6. Klare, Michael (2018). "U.S., Russia Impede Steps to Ban 'Killer Robots'". Arms Control Today. 48 (8): 31–33. ISSN 0196-125X. JSTOR 90025262.
  7. Bowcott, Owen (28 July 2015). "UK opposes international ban on developing 'killer robots'". The Guardian. Retrieved 28 July 2015.
  8. "Six in Ten (61%) Respondents Across 26 Countries Oppose the Use of Lethal Autonomous Weapons Systems". Ipsos. Retrieved 2019-02-22.
  9. "Three in Ten Americans Support Using Autonomous Weapons". Ipsos. February 7, 2017. Retrieved February 22, 2019.
  10. "Remarks at "Web Summit"". United Nations Secretary-General. 2018-11-08. Retrieved 2019-02-22.
  11. "Lethal Autonomous Weapons Pledge". Future of Life Institute. 6 June 2018. Retrieved 2019-02-22.
  12. Gibbs, Samuel (27 July 2015). "Musk, Wozniak and Hawking urge ban on warfare AI and autonomous weapons". The Guardian. Retrieved 28 July 2015.
  13. Zakrzewski, Cat (27 July 2015). "Musk, Hawking Warn of Artificial Intelligence Weapons". The Wall Street Journal. Retrieved 28 July 2015.
  14. Conger, Kate (6 March 2018). "Google Is Helping the Pentagon Build AI for Drones". Gizmodo. Retrieved 2019-02-22.
  15. Shane, Scott; Wakabayashi, Daisuke (2018-04-04). "'The Business of War': Google Employees Protest Work for the Pentagon". The New York Times. ISSN 0362-4331. Retrieved 2019-02-22.
  16. "AI at Google: our principles". Google. 2018-06-07. Retrieved 2019-02-22.
  17. "Children Vote to Stop Killer Robots". Human Rights Watch. 2020-06-09. Retrieved 2022-04-04.
  18. ""Campaign to Stop Killer Robots" wins the Ypres Peace Prize 2020". Swiss Philanthropy Foundation. www.swissphilanthropy.ch. Retrieved 2022-04-04.
  19. "Flere fredsprisforslag før fristen gikk ut" ["Several peace prize nominations before the deadline expired"]. Aftenposten. Norwegian News Agency. 31 January 2021.
  20. "Hektisk nomineringsaktivitet før fredsprisfrist" ["Hectic nomination activity before peace prize deadline"]. Dagsavisen. 31 January 2021.
  21. "Immoral Code - A film by Stop Killer Robots". www.immoralcode.io. Retrieved 2022-04-04.
  22. "The Campaign to Stop Killer Robots".
  23. "Statement by Pakistan" (PDF).
  24. "Statement of Ecuador" (PDF).
  25. "Statement of Egypt" (PDF).
  26. "Statement of the Holy See" (PDF).
  27. "Statement of Ghana" (PDF).
  28. "Statement of Zimbabwe" (PDF).
  29. "Statement of Algeria" (PDF).
  30. "Statement of Costa Rica" (PDF).
  31. "Statement of Mexico" (PDF).
  32. "Statement of Chile" (PDF).
  33. "Statement by Morocco" (PDF).