March of the Machines

March of the Machines: The Breakthrough in Artificial Intelligence
Author: Kevin Warwick
Subjects: Artificial intelligence, Robots, Artificial general intelligence
Publisher: University of Illinois Press
Publication date: 1997
Pages: 320
ISBN: 978-0252072239

March of the Machines: Why the New Race of Robots Will Rule the World (1997, hardcover), published in paperback as March of the Machines: The Breakthrough in Artificial Intelligence (2004), is a book by Kevin Warwick. It presents an overview of robotics and artificial intelligence (AI), often focusing on anecdotes of Warwick's own work, and then imagines future scenarios. In particular, Warwick finds it likely that such AIs will become smart enough to replace humans, and humans may be unable to stop them.

Contents

The book has a conversational style, with little technical detail.[1] Warwick proposes that because machines will become more intelligent than humans, machine takeover is all but inevitable. The drive to automate is fueled by economic incentives. Even if machines start out without intentions to take over, those that self-modify in a direction toward a "will to survive" are more likely to resist being turned off. Arms races will likely create ever-increasing pressure for greater autonomy in robotic warfare systems, and this pressure would be hard to curtail. Machines have a number of advantages over human minds, including the ability to expand practically without limit and to spread into space where humans cannot reach. "All the signs are that we will rapidly become merely an insignificant historical dot" (p. 301).

Reception

Writing in the New Statesman, John Durant cautions against Warwick's apparent anthropomorphism. In one passage Durant singles out, Warwick opines that the Deep Blue computer had deliberately "let Kasparov win the overall (1996) series, having shown him in the first game who was really the better player". Durant states that "There's rather a lot of this sort of thing in March of the Machines, and it's not clear how seriously it's intended to be taken." Durant also disagrees with Warwick's thesis, stating that present-day computers "are not threats to us, but rather expressions of our power: we use the machines; they don't use us." Durant further wonders why, "If Warwick's thesis about impending world robot-domination is correct", Warwick continues to undertake cybernetic research.[2]

Don Braben begins his review of Warwick's book by stating that "Specialists love to share dire predictions of the future, which stem from limited perspectives." Braben also states that, despite the centrality of intelligence to the thesis, Warwick fails to adequately pin down the slippery concept.[1]

In Human Physiology, Medvedev and Aldasheva dispute Warwick's contention that machines will become superior to humans on the grounds that "machines are man-made human organs", i.e., they extend what humans do.[3]:367 Moreover, if machines were to rebel against humans, humans could make use of other machines to combat the rebels.[3]:367 If AIs were created, humans would program them to align with human goals, and while some AIs might go awry, this would not be so different from the situation of human maniacs.[3]:368 All told, they consider Warwick's predictions of robot rebellion "grossly exaggerated".[3]:369

The blurb for the 1997 edition stated in part: "Recent breakthroughs in cybernetics mean that robots already exist with the brain power of insects. Within five years, robots will exist with the brain power of cats. In ten to 50 years, robots will exist that are more intelligent than humans." Writing in 2014, Martin Robbins of Vice quotes Warwick's predictions of robot abilities as an example of "Extravagant claims" that "have been damaging the reputation of our soon-to-be robot overlords for decades now".[4]

Notes

  1. Braben, Don (10 May 1997). "Review: Hasta la vista, babies". New Scientist. Retrieved 24 October 2014.
  2. Durant, John (25 April 1997). "March of the Machines: Why the New Race of Robots Will Rule the World". New Statesman. p. 50.
  3. V. I. Medvedev; A. A. Aldasheva (May–June 2000). "Reading March of the Machines by K. Warwick (in lieu of a review)". Human Physiology. 26 (3): 366–370. doi:10.1007/BF02760201. ISSN 1608-3164. S2CID 46169254.
  4. Robbins, Martin (11 February 2014). "We Must End Our Obsession with Robots that Look like Humans". Vice. Retrieved 24 October 2014.
