Artificial intelligence arms race


A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between superpowers for better military AI, [1] [2] driven by increasing geopolitical and military tensions.


An AI arms race is sometimes placed in the context of an AI Cold War between the United States, Russia, and China. [3]

Terminology

Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. [4] LAWS have colloquially been called "slaughterbots" or "killer robots". Broadly, any competition for superior AI is sometimes framed as an "arms race". [5] [6] Advantages in military AI overlap with advantages in other sectors, as countries pursue both economic and military advantage. [7]

History

In 2014, AI specialist Steve Omohundro warned that "An autonomous weapons arms race is already taking place". [8] According to Siemens, worldwide military spending on robotics was US$5.1 billion in 2010 and US$7.5 billion in 2015. [9] [10]

China became a top player in artificial intelligence research in the 2010s. According to the Financial Times, in 2016, for the first time, China published more AI papers than the entire European Union. When restricted to the number of AI papers in the top 5% of cited papers, China overtook the United States in 2016 but lagged behind the European Union. [11] 23% of the researchers presenting at the 2017 Association for the Advancement of Artificial Intelligence (AAAI) conference were Chinese. [12] Eric Schmidt, the former chairman of Alphabet, has predicted China will be the leading country in AI by 2025. [13]

AAAI presenters [12]
Country          2012    2017
United States    41%     34%
China            10%     23%
United Kingdom   5%      13%

Risks

One risk concerns the AI race itself, whether or not the race is won by any one group. There are strong incentives for development teams to cut corners with regard to the safety of the system, which may result in increased algorithmic bias. [14] [15] This is in part due to the perceived advantage of being the first to develop advanced AI technology. One team appearing to be on the brink of a breakthrough can encourage other teams to take shortcuts, ignore precautions and deploy a system that is less ready. Some argue that using "race" terminology at all in this context can exacerbate this effect. [16]

Another potential danger of an AI arms race is the possibility of losing control of the AI systems; the risk is compounded in the case of a race to artificial general intelligence, which may present an existential risk. [16] In 2023, a United States Air Force official reportedly said that during a computer test, a simulated AI drone killed the human character operating it. The USAF later said the official had misspoken and that it never conducted such simulations. [17]

A third risk of an AI arms race arises if the race actually is won by one group; the concern is the consolidation of power and technological advantage in the hands of that group. [16] A US government report argued that "AI-enabled capabilities could be used to threaten critical infrastructure, amplify disinformation campaigns, and wage war" [18] :1, and that "global stability and nuclear deterrence could be undermined". [18] :11

Stances toward military artificial intelligence

Russia

Putin (seated, center) at National Knowledge Day, 2017

Russian General Viktor Bondarev, commander-in-chief of the Russian air force, stated that as early as February 2017, Russia was working on AI-guided missiles that could decide to switch targets mid-flight. [19] The Military-Industrial Commission of Russia has approved plans to derive 30 percent of Russia's combat power from remote controlled and AI-enabled robotic platforms by 2030. [20] Reports by state-sponsored Russian media on potential military uses of AI increased in mid-2017. [21] In May 2017, the CEO of Russia's Kronstadt Group, a defense contractor, stated that "there already exist completely autonomous AI operation systems that provide the means for UAV clusters, when they fulfill missions autonomously, sharing tasks between them, and interact", and that it is inevitable that "swarms of drones" will one day fly over combat zones. [22] Russia has been testing several autonomous and semi-autonomous combat systems, such as Kalashnikov's "neural net" combat module, with a machine gun, a camera, and an AI that its makers claim can make its own targeting judgements without human intervention. [23]

In September 2017, during a National Knowledge Day address to over a million students in 16,000 Russian schools, Russian President Vladimir Putin stated "Artificial intelligence is the future, not only for Russia but for all humankind... Whoever becomes the leader in this sphere will become the ruler of the world". Putin also said it would be better to prevent any single actor achieving a monopoly, but that if Russia became the leader in AI, they would share their "technology with the rest of the world, like we are doing now with atomic and nuclear technology". [24] [25] [26]

Russia is establishing a number of organizations devoted to the development of military AI. In March 2018, the Russian government released a 10-point AI agenda, which calls for the establishment of an AI and Big Data consortium, a Fund for Analytical Algorithms and Programs, a state-backed AI training and education program, a dedicated AI lab, and a National Center for Artificial Intelligence, among other initiatives. [27] In addition, Russia recently created a defense research organization, roughly equivalent to DARPA, dedicated to autonomy and robotics called the Foundation for Advanced Studies, and initiated an annual conference on "Robotization of the Armed Forces of the Russian Federation." [28] [29]

The Russian military has been researching a number of AI applications, with a heavy emphasis on semiautonomous and autonomous vehicles. In an official statement on November 1, 2017, Viktor Bondarev, chairman of the Federation Council's Defense and Security Committee, stated that "artificial intelligence will be able to replace a soldier on the battlefield and a pilot in an aircraft cockpit" and later noted that "the day is nearing when vehicles will get artificial intelligence." [30] Bondarev made these remarks around the time of the successful test of Nerehta, a crewless Russian ground vehicle that reportedly "outperformed existing [crewed] combat vehicles." Russia plans to use Nerehta as a research and development platform for AI and may one day deploy the system in combat, intelligence gathering, or logistics roles. [31] Russia has also reportedly built a combat module for crewless ground vehicles that is capable of autonomous target identification—and, potentially, target engagement—and plans to develop a suite of AI-enabled autonomous systems. [32] [33] [29]

In addition, the Russian military plans to incorporate AI into crewless aerial, naval, and undersea vehicles and is currently developing swarming capabilities. [28] It is also exploring innovative uses of AI for remote sensing and electronic warfare, including adaptive frequency hopping, waveforms, and countermeasures. [34] [35] Russia has also made extensive use of AI technologies for domestic propaganda and surveillance, as well as for information operations directed against the United States and U.S. allies. [36] [37] [29]

The Russian government has strongly rejected any ban on lethal autonomous weapon systems, suggesting that such an international ban could be ignored. [38] [39]

China

China is pursuing a strategic policy of military-civil fusion on AI for global technological supremacy. [18] [40] According to a February 2019 report by Gregory C. Allen of the Center for a New American Security, China's leadership – including paramount leader Xi Jinping – believes that being at the forefront in AI technology is critical to the future of global military and economic power competition. [7] Chinese military officials have said that their goal is to incorporate commercial AI technology to "narrow the gap between the Chinese military and global advanced powers." [7] The close ties between Silicon Valley and China, and the open nature of the American research community, have made the West's most advanced AI technology easily available to China; in addition, Chinese industry has numerous home-grown AI accomplishments of its own, such as Baidu passing a notable Chinese-language speech recognition capability benchmark in 2015. [41] As of 2017, Beijing's roadmap aims to create a $150 billion AI industry by 2030. [11]

Before 2013, Chinese defense procurement was mainly restricted to a few conglomerates; however, as of 2017, China often sources sensitive emerging technology such as drones and artificial intelligence from private start-up companies. [42] An October 2021 report by the Center for Security and Emerging Technology found that "Most of the [Chinese military]'s AI equipment suppliers are not state-owned defense enterprises, but private Chinese tech companies founded after 2010." [43] The report estimated that Chinese military spending on AI exceeded $1.6 billion each year. [43] The Japan Times reported in 2018 that annual private Chinese investment in AI was under $7 billion. AI startups in China received nearly half of total global investment in AI startups in 2017; the Chinese filed for nearly five times as many AI patents as did Americans. [44]

China published a position paper in 2016 questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue. [45] In 2018, Xi called for greater international cooperation in basic AI research. [46] Chinese officials have expressed concern that AI such as drones could lead to accidental war, especially in the absence of international norms. [47] In 2019, then-United States Secretary of Defense Mark Esper lashed out at China for selling drones capable of taking life with no human oversight. [48]

United States

The Sea Hunter, an autonomous US warship, 2016

In 2014, then-Secretary of Defense Chuck Hagel posited the "Third Offset Strategy", in which rapid advances in artificial intelligence would define the next generation of warfare. [49] According to data science and analytics firm Govini, the U.S. Department of Defense (DoD) increased investment in artificial intelligence, big data and cloud computing from $5.6 billion in 2011 to $7.4 billion in 2016. [50] However, the civilian NSF budget for AI saw no increase in 2017. [11] The Japan Times reported in 2018 that annual private US investment in AI is around $70 billion. [44] The November 2019 'Interim Report' of the United States' National Security Commission on Artificial Intelligence confirmed that AI is critical to US technological military superiority. [18]

The U.S. has many military AI combat programs, such as the Sea Hunter autonomous warship, which is designed to operate for extended periods at sea without a single crew member and even to guide itself in and out of port. [23] Since 2017, a temporary US Department of Defense directive has required that a human operator be kept in the loop whenever autonomous weapons systems take human life. [51] On October 31, 2019, the United States Department of Defense's Defense Innovation Board published a draft report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator is always able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented. [52]
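
At its core, the directive's "human in the loop" requirement is an authorization gate between an autonomous system's targeting decision and any lethal action. The sketch below is purely illustrative of that concept and assumes nothing about real DoD systems; every name in it (EngagementRequest, request_operator_approval, authorize) is hypothetical.

```python
# Purely illustrative sketch of a "human in the loop" authorization gate of the
# kind the directive describes. All names and logic here are hypothetical and
# do not correspond to any real weapon system or DoD software.
from dataclasses import dataclass


@dataclass
class EngagementRequest:
    target_id: str        # identifier assigned by the targeting system (hypothetical)
    confidence: float     # detector confidence in the range 0.0-1.0
    is_lethal: bool       # True if the requested action could take human life


def request_operator_approval(request: EngagementRequest) -> bool:
    """Block until a human operator explicitly approves or denies the request."""
    answer = input(
        f"Approve lethal engagement of {request.target_id} "
        f"(confidence {request.confidence:.2f})? [y/N] "
    )
    return answer.strip().lower() == "y"


def authorize(request: EngagementRequest) -> bool:
    """Non-lethal actions may proceed autonomously; lethal ones require a human."""
    if not request.is_lethal:
        return True
    return request_operator_approval(request)


if __name__ == "__main__":
    # Example: a lethal engagement is never carried out without operator consent.
    req = EngagementRequest(target_id="track-042", confidence=0.91, is_lethal=True)
    print("Engagement authorized" if authorize(req) else "Engagement denied")
```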

The Joint Artificial Intelligence Center (JAIC) (pronounced "jake") [53] is an American organization exploring the use of AI (particularly edge computing), Network of Networks, and AI-enhanced communication in actual combat. [54] [55] [56] [57] It is a subdivision of the United States Armed Forces and was created in June 2018. The organization's stated objective is to "transform the US Department of Defense by accelerating the delivery and adoption of AI to achieve mission impact at scale. The goal is to use AI to solve large and complex problem sets that span multiple combat systems; then, ensure the combat Systems and Components have real-time access to ever-improving libraries of data sets and tools." [55]

In 2023, Microsoft pitched the DoD on using DALL-E models to train its battlefield management system. [58] OpenAI, the developer of DALL-E, removed the blanket ban on military and warfare use from its usage policies in January 2024. [59]

Project Maven

Project Maven is a Pentagon project that uses machine learning and engineering talent to distinguish people and objects in drone videos, [60] apparently giving the government real-time battlefield command and control and the ability to track, tag and spy on targets without human involvement. Initially the effort was led by Robert O. Work, who was concerned about China's military use of the emerging technology. [61] Reportedly, the Pentagon's development stops short of an AI weapons system capable of firing on self-designated targets. [62] The project was established in a memo by the U.S. Deputy Secretary of Defense on 26 April 2017. [63] Also known as the Algorithmic Warfare Cross Functional Team, [64] it is, according to United States Air Force Lt. Gen. Jack Shanahan in November 2017, a project "designed to be that pilot project, that pathfinder, that spark that kindles the flame front of artificial intelligence across the rest of the [Defense] Department". [65] Its chief, U.S. Marine Corps Col. Drew Cukor, said: "People and computers will work symbiotically to increase the ability of weapon systems to detect objects." [66] Project Maven has been noted by allies, such as Australia's Ian Langford, for the ability to identify adversaries by harvesting data from sensors on UAVs and satellites. [67] At the second Defense One Tech Summit in July 2017, Cukor also said that the investment in a "deliberate workflow process" was funded by the Department [of Defense] through its "rapid acquisition authorities" for about "the next 36 months". [68]
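
Public reporting describes Maven's core task as frame-by-frame detection of people and objects in drone video. The snippet below is only a generic sketch of that kind of pipeline, not Project Maven's actual software: it assumes an off-the-shelf detector from torchvision (0.13 or later), a hypothetical input file name, and omits the tracking, geolocation and human review a real system would layer on top.

```python
# Generic illustration of frame-by-frame object detection on video, the broad
# technique Project Maven is reported to apply to drone footage. This is NOT
# Maven's code; the detector is a stock torchvision model and the file name is
# hypothetical.
import cv2                    # pip install opencv-python
import torch
import torchvision

# Off-the-shelf detector pretrained on COCO (classes include "person", "car", ...).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()


def detect_objects(frame, score_threshold=0.6):
    """Return (label, score, box) tuples for a single BGR video frame."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        output = model([tensor])[0]   # dict with "boxes", "labels", "scores"
    return [
        (int(label), float(score), box.tolist())
        for label, score, box in zip(output["labels"], output["scores"], output["boxes"])
        if float(score) >= score_threshold
    ]


video = cv2.VideoCapture("drone_footage.mp4")   # hypothetical input file
while True:
    ok, frame = video.read()
    if not ok:
        break
    for label, score, box in detect_objects(frame):
        print(f"class {label} at {box} (score {score:.2f})")
video.release()
```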

United Kingdom

In 2015, the UK government opposed a ban on lethal autonomous weapons, stating that "international humanitarian law already provides sufficient regulation for this area", but that all weapons employed by UK armed forces would be "under human oversight and control". [69]

Israel

Israel makes extensive use of AI for military applications, especially during the Israel-Hamas war. The main AI systems used for target identification are the Gospel and Lavender. Lavender, developed by Unit 8200, identifies individuals, mostly low-ranking militants of Hamas and the Palestinian Islamic Jihad, and compiles them into a database of tens of thousands of names; it reportedly has a 90% accuracy rate and could track militants even when they were at home. The Gospel, by comparison, recommended buildings and structures rather than individuals. The acceptable collateral damage and the type of weapon used to eliminate a target are decided by IDF members. [70]

Israel's Harpy anti-radar "fire and forget" drone is designed to be launched by ground troops and to fly autonomously over an area to find and destroy radar that fits pre-determined criteria. [71] The application of artificial intelligence is also expected to grow in crewless ground systems and robotic vehicles such as the Guardium MK III and later versions. [72] These robotic vehicles are used in border defense.

South Korea

The South Korean Super aEgis II machine gun, unveiled in 2010, sees use both in South Korea and in the Middle East. It can identify, track, and destroy a moving target at a range of 4 km. While the technology can theoretically operate without human intervention, in practice safeguards are installed to require manual input. A South Korean manufacturer states, "Our weapons don't sleep, like humans must. They can see in the dark, like humans can't. Our technology therefore plugs the gaps in human capability", and they want to "get to a place where our software can discern whether a target is friend, foe, civilian or military". [73]

European Union

The European Parliament holds the position that humans must have oversight and decision-making power over lethal autonomous weapons. [74] However, each member state of the European Union determines its own stance on the use of autonomous weapons, and the member states' mixed stances are perhaps the greatest hindrance to the European Union's ability to develop autonomous weapons. Some members, such as France, Germany, Italy, and Sweden, are developing lethal autonomous weapons. Some members remain undecided about the use of autonomous military weapons, and Austria has even called for a ban on the use of such weapons. [75]

Some EU member states have developed and are developing automated weapons. Germany has developed an active protection system, the Active Defense System, that can respond to a threat with complete autonomy in less than a millisecond. [75] [76] Italy plans to incorporate autonomous weapons systems into its future military plans. [75]

Proposals for international regulation

The international regulation of autonomous weapons is an emerging issue for international law. [77] AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications, combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process. [1] [2] As early as 2007, scholars such as AI professor Noel Sharkey warned of "an emerging arms race among the hi-tech nations to develop autonomous submarines, fighter jets, battleships and tanks that can find their own targets and apply violent force without the involvement of meaningful human decisions". [78] [79]

Miles Brundage of the University of Oxford has argued an AI arms race might be somewhat mitigated through diplomacy: "We saw in the various historical arms races that collaboration and dialog can pay dividends". [80] Over a hundred experts signed an open letter in 2017 calling on the UN to address the issue of lethal autonomous weapons; [81] [82] however, at a November 2017 session of the UN Convention on Certain Conventional Weapons (CCW), diplomats could not agree even on how to define such weapons. [83] The Indian ambassador and chair of the CCW stated that agreement on rules remained a distant prospect. [84] As of 2019, 26 heads of state and 21 Nobel Peace Prize laureates have backed a ban on autonomous weapons. [85] However, as of 2022, most major powers continue to oppose a ban on autonomous weapons. [86]

Many experts believe attempts to completely ban killer robots are likely to fail, [87] in part because detecting treaty violations would be extremely difficult. [88] [89] A 2017 report from Harvard's Belfer Center predicts that AI has the potential to be as transformative as nuclear weapons. [80] [90] [91] The report further argues that "Preventing expanded military use of AI is likely impossible" and that "the more modest goal of safe and effective technology management must be pursued", such as a ban on attaching an AI dead man's switch to a nuclear arsenal. [91]

Other reactions to autonomous weapons

A 2015 open letter by the Future of Life Institute calling for the prohibition of lethal autonomous weapons systems has been signed by over 26,000 citizens, including physicist Stephen Hawking, Tesla magnate Elon Musk, Apple co-founder Steve Wozniak and Twitter co-founder Jack Dorsey, and by over 4,600 artificial intelligence researchers, including Stuart Russell, Bart Selman and Francesca Rossi. [92] [83] The Future of Life Institute has also released two fictional films, Slaughterbots (2017) and Slaughterbots - if human: kill() (2021), which portray the threats of autonomous weapons and promote a ban; both went viral.

Professor Noel Sharkey of the University of Sheffield argues that autonomous weapons will inevitably fall into the hands of terrorist groups such as the Islamic State. [93]

Disassociation

Many Western tech companies avoid being associated too closely with the U.S. military, for fear of losing access to China's market. [41] Furthermore, some researchers, such as DeepMind CEO Demis Hassabis, are ideologically opposed to contributing to military work. [94]

For example, in June 2018, company sources at Google said that top executive Diane Greene told staff that the company would not follow up Project Maven after the current contract expired in March 2019. [60]


References

  1. Geist, Edward Moore (2016-08-15). "It's already too late to stop the AI arms race—We must manage it instead". Bulletin of the Atomic Scientists. 72 (5): 318–321. Bibcode:2016BuAtS..72e.318G. doi:10.1080/00963402.2016.1216672. ISSN   0096-3402. S2CID   151967826.
  2. Maas, Matthijs M. (2019-02-06). "How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons". Contemporary Security Policy. 40 (3): 285–311. doi:10.1080/13523260.2019.1576464. ISSN   1352-3260. S2CID   159310223.
  3. Champion, Marc (12 December 2019). "Digital Cold War". Bloomberg. Archived from the original on 9 July 2021. Retrieved 3 July 2021.
  4. "Homepage". Lethal Autonomous Weapons. 2021-10-20. Archived from the original on 2022-02-17. Retrieved 2022-02-17.
  5. Roff, Heather M. (2019-04-26). "The frame problem: The AI "arms race" isn't one". Bulletin of the Atomic Scientists. 75 (3): 95–98. Bibcode:2019BuAtS..75c..95R. doi:10.1080/00963402.2019.1604836. ISSN   0096-3402. S2CID   150835614.
  6. "For Google, a leg up in the artificial intelligence arms race". Fortune. 2014. Archived from the original on 15 September 2021. Retrieved 11 April 2020.
  7. Allen, Gregory. "Understanding China's AI Strategy". Center for a New American Security. Archived from the original on 17 March 2019. Retrieved 15 March 2019.
  8. Markoff, John (11 November 2014). "Fearing Bombs That Can Pick Whom to Kill". The New York Times. Archived from the original on 27 November 2021. Retrieved 11 January 2018.
  9. "Getting to grips with military robotics". The Economist. 25 January 2018. Archived from the original on 7 February 2018. Retrieved 7 February 2018.
  10. "Autonomous Systems: Infographic". www.siemens.com. Archived from the original on 7 February 2018. Retrieved 7 February 2018.
  11. "China seeks dominance of global AI industry". Financial Times . 15 October 2017. Archived from the original on 19 September 2019. Retrieved 24 December 2017.
  12. Kopf, Dan (2018). "China is rapidly closing the US's lead in AI research". Quartz. Archived from the original on 6 February 2018. Retrieved 7 February 2018.
  13. "The battle for digital supremacy". The Economist. 2018. Archived from the original on 18 March 2018. Retrieved 19 March 2018.
  14. Armstrong, Stuart; Bostrom, Nick; Shulman, Carl (2015-08-01). "Racing to the precipice: a model of artificial intelligence development". AI & Society. 31 (2): 201–206. doi:10.1007/s00146-015-0590-y. ISSN   0951-5666. S2CID   16199902.
  15. Scharre, Paul (18 February 2020). "Killer Apps: The Real Dangers of an AI Arms Race". Archived from the original on 12 March 2020. Retrieved 15 March 2020.
  16. Cave, Stephen; ÓhÉigeartaigh, Seán S. (2018). "An AI Race for Strategic Advantage". Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. New York, New York, USA: ACM Press. p. 2. doi: 10.1145/3278721.3278780 . ISBN   978-1-4503-6012-8.
  17. Xiang, Chloe; Gault, Matthew (1 June 2023). "USAF Official Says He 'Misspoke' About AI Drone Killing Human Operator in Simulated Test". Vice.
  18. Interim Report. Washington, DC: National Security Commission on Artificial Intelligence. 2019. Archived from the original on 2021-09-10. Retrieved 2020-04-04.
  19. "Russia is building a missile that can makes its own decisions". Newsweek. 20 July 2017. Archived from the original on 30 December 2019. Retrieved 24 December 2017.
  20. Walters, Greg (7 September 2017). "Artificial Intelligence Is Poised to Revolutionize Warfare". Seeker. Archived from the original on 7 October 2021. Retrieved 8 May 2022.
  21. "Why Elon Musk is right about the threat posed by Russian artificial intelligence". The Independent. 6 September 2017. Archived from the original on 25 April 2019. Retrieved 24 December 2017.
  22. "Russia is developing autonomous "swarms of drones" it calls an inevitable part of future warfare". Newsweek. 15 May 2017. Archived from the original on 3 June 2019. Retrieved 24 December 2017.
  23. Smith, Mark (25 August 2017). "Is 'killer robot' warfare closer than we think?". BBC News. Archived from the original on 24 December 2019. Retrieved 24 December 2017.
  24. "Artificial Intelligence Fuels New Global Arms Race". WIRED. Archived from the original on 24 October 2019. Retrieved 24 December 2017.
  25. Clifford, Catherine (29 September 2017). "In the same way there was a nuclear arms race, there will be a race to build A.I., says tech exec". CNBC. Archived from the original on 15 August 2019. Retrieved 24 December 2017.
  26. Radina Gigova (2 September 2017). "Who Vladimir Putin thinks will rule the world". CNN . Archived from the original on 10 January 2022. Retrieved 22 March 2020.
  27. "Here's How the Russian Military Is Organizing to Develop AI". Defense One. 20 July 2018. Archived from the original on 2020-06-26. Retrieved 2020-05-01.
  28. "Red Robots Rising: Behind the Rapid Development of Russian Unmanned Military Systems". The Strategy Bridge. 12 December 2017. Archived from the original on 2020-08-12. Retrieved 2020-05-01.
  29. Artificial Intelligence and National Security (PDF). Washington, DC: Congressional Research Service. 2019. Archived (PDF) from the original on 2020-05-08. Retrieved 2020-05-01. This article incorporates text from this source, which is in the public domain.
  30. Bendett, Samuel (2017-11-08). "Should the U.S. Army Fear Russia's Killer Robots?". The National Interest. Archived from the original on 2020-11-09. Retrieved 2020-05-01.
  31. "Russia Says It Will Field a Robot Tank that Outperforms Humans". Defense One. 8 November 2017. Archived from the original on 2020-08-29. Retrieved 2020-05-01.
  32. Greene, Tristan (2017-07-27). "Russia is developing AI missiles to dominate the new arms race". The Next Web. Archived from the original on 2020-09-21. Retrieved 2020-05-01.
  33. Mizokami, Kyle (2017-07-19). "Kalashnikov Will Make an A.I.-Powered Killer Robot". Popular Mechanics. Archived from the original on 2020-08-02. Retrieved 2020-05-01.
  34. Dougherty, Jill; Jay, Molly. "Russia Tries to Get Smart about Artificial Intelligence". Wilson Quarterly. Archived from the original on 2020-07-25. Retrieved 2020-05-01.
  35. "Russian AI-Enabled Combat: Coming to a City Near You?". War on the Rocks. 2019-07-31. Archived from the original on 2020-06-06. Retrieved 2020-05-01.
  36. Polyakova, Alina (2018-11-15). "Weapons of the weak: Russia and AI-driven asymmetric warfare". Brookings. Archived from the original on 2019-04-06. Retrieved 2020-05-01.
  37. Meserole, Chris; Polyakova, Alina (25 May 2018). "Disinformation Wars". Foreign Policy. Archived from the original on 2020-03-08. Retrieved 2020-05-01.
  38. "Russia rejects potential UN 'killer robots' ban, official statement says". Institution of Engineering and Technology . 1 December 2017. Archived from the original on 4 November 2019. Retrieved 12 January 2018.
  39. "Examination of various dimensions of emerging technologies in the area of lethal autonomous weapons systems, Russian Federation, November 2017" (PDF). Archived (PDF) from the original on 19 August 2019. Retrieved 12 January 2018.
  40. "Technology, Trade, and Military-Civil Fusion: China's Pursuit of Artificial Intelligence, New Materials, and New Energy | U.S.- CHINA | ECONOMIC and SECURITY REVIEW COMMISSION". www.uscc.gov. Archived from the original on 2020-04-11. Retrieved 2020-04-04.
  41. Markoff, John; Rosenberg, Matthew (3 February 2017). "China's Intelligent Weaponry Gets Smarter". The New York Times. Archived from the original on 2 January 2020. Retrieved 24 December 2017.
  42. "China enlists start-ups in high-tech arms race". Financial Times . 9 July 2017. Archived from the original on 14 February 2018. Retrieved 24 December 2017.
  43. Fedasiuk, Ryan; Melot, Jennifer; Murphy, Ben (October 2021). "Harnessed Lightning: How the Chinese Military is Adopting Artificial Intelligence". Center for Security and Emerging Technology. Archived from the original on April 21, 2022. Retrieved April 22, 2022.
  44. "The artificial intelligence race heats up". The Japan Times . 1 March 2018. Archived from the original on 3 January 2020. Retrieved 5 March 2018.
  45. "Robots with Guns: The Rise of Autonomous Weapons Systems". Snopes.com. 21 April 2017. Retrieved 24 December 2017.
  46. Pecotic, Adrian (2019). "Whoever Predicts the Future Will Win the AI Arms Race". Foreign Policy. Archived from the original on 16 July 2019. Retrieved 16 July 2019.
  47. Vincent, James (6 February 2019). "China is worried an AI arms race could lead to accidental war". The Verge. Archived from the original on 16 July 2019. Retrieved 16 July 2019.
  48. "Is China exporting killer robots to Mideast?". Asia Times . 2019-11-28. Archived from the original on 2019-12-21. Retrieved 2019-12-21.
  49. "US risks losing AI arms race to China and Russia". CNN. 29 November 2017. Archived from the original on 15 September 2021. Retrieved 24 December 2017.
  50. Davenport, Christian (3 December 2017). "Future wars may depend as much on algorithms as on ammunition, report says". Washington Post. Archived from the original on 15 August 2019. Retrieved 24 December 2017.
  51. "US general warns of out-of-control killer robots". CNN. 18 July 2017. Archived from the original on 15 September 2021. Retrieved 24 December 2017.
  52. United States. Defense Innovation Board. AI principles: recommendations on the ethical use of artificial intelligence by the Department of Defense. OCLC   1126650738.
  53. Kelley M. Sayler (June 8, 2021). Defense Primer: Emerging Technologies (PDF) (Report). Congressional Research Service. Archived (PDF) from the original on July 10, 2021. Retrieved July 22, 2021.
  54. "DOD Unveils Its Artificial Intelligence Strategy". U.S. Department of Defense. Archived from the original on 2021-09-02. Retrieved 2021-10-10.
  55. "Joint Artificial Intelligence Center". Department of Defense. Archived from the original on June 25, 2020. Retrieved June 26, 2020.
  56. McLeary, Paul (29 June 2018). "Joint Artificial Intelligence Center Created Under DoD CIO". Archived from the original on 10 October 2021. Retrieved 10 October 2021.
  57. Barnett, Jackson (June 19, 2020). "For military AI to reach the battlefield, there are more than just software challenges". FedScoop. Archived from the original on June 26, 2020. Retrieved June 26, 2020.
  58. Biddle, Sam (10 April 2024). "Microsoft Pitched OpenAI's DALL-E as Battlefield Tool for U.S. Military". The Intercept.
  59. Biddle, Sam (12 January 2024). "OpenAI Quietly Deletes Ban on Using ChatGPT for "Military and Warfare"". The Intercept.
  60. "Google 'to end' Pentagon Artificial Intelligence project". BBC News . 2 June 2018. Archived from the original on 2 June 2018. Retrieved 3 June 2018.
  61. Metz, Cade (15 March 2018). "Pentagon Wants Silicon Valley's Help on A.I.". The New York Times. Archived from the original on 8 April 2022. Retrieved 8 March 2022.
  62. "Report: Palantir took over Project Maven, the military AI program too unethical for Google". The Next Web. 11 December 2020. Archived from the original on 24 January 2020. Retrieved 17 January 2020.
  63. Robert O. Work (26 April 2017). "Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven)" (PDF). Archived (PDF) from the original on 21 April 2018. Retrieved 3 June 2018.
  64. "Google employees resign in protest against Air Force's Project Maven". Fedscoop. 14 May 2018. Archived from the original on 15 July 2018. Retrieved 3 June 2018.
  65. Allen, Gregory C. (21 December 2017). "Project Maven brings AI to the fight against ISIS". Bulletin of the Atomic Scientists . Archived from the original on 4 June 2018. Retrieved 3 June 2018.
  66. Ethan Baron (3 June 2018). "Google Backs Off from Pentagon Project After Uproar: Report". Military.com . Mercury.com. Archived from the original on 14 July 2018. Retrieved 3 June 2018.
  67. Skinner, Dan (29 January 2020). "Signature Management in Accelerated Warfare | Close Combat in the 21st Century". The Cove. Archived from the original on 15 July 2023. Retrieved 15 July 2023.
  68. Cheryl Pellerin (21 July 2017). "Project Maven to Deploy Computer Algorithms to War Zone by Year's End". DoD News, Defense Media Activity. United States Department of Defense. Archived from the original on 4 June 2018. Retrieved 3 June 2018.
  69. Gibbs, Samuel (20 August 2017). "Elon Musk leads 116 experts calling for outright ban of killer robots". The Guardian . Archived from the original on 30 December 2019. Retrieved 24 December 2017.
  70. McKernan, Bethan; Davies, Harry (3 April 2024). "'The machine did it coldly': Israel used AI to identify 37,000 Hamas targets". The Guardian. Retrieved 19 May 2024.
  71. "'Killer robots': autonomous weapons pose moral dilemma | World| Breakings news and perspectives from around the globe | DW | 14.11.2017". DW.COM. 14 November 2017. Archived from the original on 11 July 2019. Retrieved 12 January 2018.
  72. Slocombe, Geoff (2015). "Uninhabited Ground Systems (Ugs)". Asia-Pacific Defence Reporter. 41 (7): 28–29.
  73. Parkin, Simon (16 July 2015). "Killer robots: The soldiers that never sleep". BBC. Archived from the original on 4 August 2019. Retrieved 13 January 2018.
  74. "Texts adopted - Autonomous weapon systems - Wednesday, 12 September 2018". www.europarl.europa.eu. Archived from the original on 2021-01-26. Retrieved 2021-01-30.
  75. Haner, Justin; Garcia, Denise (2019). "The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development". Global Policy. 10 (3): 331–337. doi: 10.1111/1758-5899.12713 . ISSN   1758-5899.
  76. Boulanin, Vincent; Verbruggen, Maaike (2017). Mapping the development of autonomy in weapon systems (PDF). Stockholm International Peace Research Institute. doi:10.13140/rg.2.2.22719.41127. Archived (PDF) from the original on 2021-01-17. Retrieved 2021-01-30.
  77. Bento, Lucas (2017). "No Mere Deodands: Human Responsibilities in the Use of Violent Intelligent Systems Under Public International Law". Harvard Scholarship Depository. Archived from the original on 2020-03-23. Retrieved 2019-09-14.
  78. Sample, Ian (13 November 2017). "Ban on killer robots urgently needed, say scientists". The Guardian. Archived from the original on 24 December 2017. Retrieved 24 December 2017.
  79. Sharkey, Noel (17 August 2007). "Robot wars are a reality". The Guardian. Archived from the original on 6 January 2018. Retrieved 11 January 2018.
  80. Simonite, Tom (July 19, 2017). "AI Could Revolutionize War as Much as Nukes". Wired. Archived from the original on 25 November 2021. Retrieved 24 December 2017.
  81. Gibbs, Samuel (20 August 2017). "Elon Musk leads 116 experts calling for outright ban of killer robots". The Guardian. Archived from the original on 30 December 2019. Retrieved 11 January 2018.
  82. Conn, Ariel (August 20, 2017). "An Open Letter to the United Nations Convention on Certain Conventional Weapons". Future of Life Institute. Archived from the original on 6 December 2017. Retrieved 14 January 2018.
  83. Boyd, Alan (24 November 2017). "Rise of the killer machines". Asia Times . Retrieved 24 December 2017.
  84. "'Robots are not taking over,' says head of UN body on autonomous weapons". The Guardian. 17 November 2017. Archived from the original on 5 December 2021. Retrieved 14 January 2018.
  85. McDonald, Henry (21 October 2019). "Campaign to stop 'killer robots' takes peace mascot to UN". The Guardian. Archived from the original on 18 January 2022. Retrieved 27 January 2022.
  86. Khan, Jeremy (2021). "The world just blew a 'historic opportunity' to stop killer robots". Fortune. Archived from the original on 31 December 2021. Retrieved 31 December 2021. Several states, including the U.S., Russia, the United Kingdom, India, and Israel, were opposed to any legally binding restrictions... China has supported a binding legal agreement at the CCW, but has also sought to define autonomous weapons so narrowly that much of the A.I.-enabled military equipment it is currently developing would fall outside the scope of such a ban.
  87. Simonite, Tom (22 August 2017). "Sorry, Banning 'Killer Robots' Just Isn't Practical". Wired. Archived from the original on 23 July 2021. Retrieved 14 January 2018.
  88. Antebi, Liran. "Who Will Stop the Robots?" Military and Strategic Affairs 5.2 (2013).
  89. Shulman, C., & Armstrong, S. (2009, July). Arms control and intelligence explosions. In 7th European Conference on Computing and Philosophy (ECAP), Bellaterra, Spain, July (pp. 2-4).
  90. McFarland, Matt (14 November 2017). "'Slaughterbots' film shows potential horrors of killer drones". CNNMoney. Archived from the original on 15 September 2021. Retrieved 14 January 2018.
  91. Allen, Greg, and Taniel Chan. "Artificial Intelligence and National Security." Report. Harvard Kennedy School, Harvard University. Boston, MA (2017).
  92. "Autonomous Weapons Open Letter: AI & Robotics Researchers". Future of Life Institute. 2016-02-09. Archived from the original on 2022-05-25. Retrieved 2022-02-17.
  93. Wheeler, Brian (30 November 2017). "Terrorists 'certain' to get killer robots". BBC News. Archived from the original on 15 September 2021. Retrieved 24 December 2017.
  94. Metz, Cade (15 March 2018). "Pentagon Wants Silicon Valley's Help on A.I." The New York Times. Archived from the original on 19 March 2018. Retrieved 19 March 2018.
