Our Final Invention

First edition cover
Author: James Barrat
Country: United States
Language: English
Publisher: Thomas Dunne Books
Publication date: October 1, 2013
Media type: Print (Hardback)
Pages: 336
ISBN: 978-0-312-62237-4

Our Final Invention: Artificial Intelligence and the End of the Human Era is a 2013 non-fiction book by the American author James Barrat. The book discusses the potential benefits and possible risks of human-level or superhuman artificial intelligence.[1] Those risks include the extermination of the human race.[2]


Summary

James Barrat weaves together explanations of AI concepts, AI history, and interviews with prominent AI researchers including Eliezer Yudkowsky and Ray Kurzweil. The book starts with an account of how an artificial general intelligence could become an artificial super-intelligence through recursive self-improvement. In subsequent chapters, the book covers the history of AI, including an account of the work done by I. J. Good, up to the work and ideas of researchers in the field today.
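
The recursive self-improvement mechanism described here can be illustrated with a toy simulation. The sketch below is not from Barrat's book: the compounding rule and the gain_rate parameter are hypothetical choices made purely for illustration.

    # Toy model of recursive self-improvement (illustrative only; the
    # compounding rule and parameters are hypothetical, not from Barrat).
    # Each cycle, capability grows by a factor proportional to current
    # capability, so the gains themselves accelerate.

    def self_improvement(capability=1.0, gain_rate=0.5, cycles=8):
        """Yield (cycle, capability) after each improvement cycle."""
        for cycle in range(1, cycles + 1):
            # A more capable agent extracts a larger improvement per cycle.
            capability *= 1.0 + gain_rate * capability
            yield cycle, capability

    for cycle, capability in self_improvement():
        print(f"cycle {cycle}: capability = {capability:.3g}")

Under these assumptions growth is faster than exponential: the first cycle adds 50 percent, while the eighth multiplies capability by a factor of several hundred million. That qualitative shift, not any particular parameter values, is what the "intelligence explosion" scenario refers to.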

Throughout the book, Barrat takes a cautionary tone, focusing on the threats artificial super-intelligence poses to human existence. Barrat emphasizes how difficult it would be to control or even to predict the actions of something that may become orders of magnitude more intelligent than the most intelligent humans.

Reception

On December 13, 2013, journalist Matt Miller interviewed Barrat for his podcast, "This... is interesting". Miller then discussed the interview, and the issues raised by Our Final Invention, in his weekly opinion column for The Washington Post.[3]

Seth Baum, executive director of the Global Catastrophic Risk Institute and one of the people cited by Barrat in his book, reviewed the book favorably on Scientific American's "invited guest" blog, calling it a welcome counterpoint to the vision articulated by Ray Kurzweil in his book The Singularity Is Near.[4]

Gary Marcus questions Barrat's argument "that tendencies toward self-preservation and resource acquisition are inherent in any sufficiently complex, goal-driven system", noting that present-day AI does not have such drives, but Marcus concedes "that the goals of machines could change as they get smarter", and he feels that "Barrat is right to ask" about these important issues. [5]

Our Final Invention was a Huffington Post Definitive Tech Book of 2013. [6]


Related Research Articles

Ray Kurzweil – American author, inventor and futurist (born 1948)

Raymond Kurzweil is an American computer scientist, author, inventor, and futurist. He is involved in fields such as optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments. He has written books on health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism. Kurzweil is a public advocate for the futurist and transhumanist movements and gives public talks to share his optimistic outlook on life extension technologies and the future of nanotechnology, robotics, and biotechnology.

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
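
Good's model is qualitative, but the contrast between ordinary growth and a "runaway reaction" can be made concrete with a worked equation. The quadratic-feedback assumption below is an illustration introduced here, not a formula from Good or Barrat: suppose the rate of improvement is proportional to the square of current intelligence, so that each unit of intelligence accelerates the improvement of every other unit. Then

    \frac{dI}{dt} = k I^2 \quad\Longrightarrow\quad I(t) = \frac{I_0}{1 - k I_0 t},

which diverges as t approaches t* = 1/(k I_0): under this assumption intelligence grows without bound in finite time. Plain exponential growth, dI/dt = k I, never does this, and that qualitative difference is what the word "explosion" gestures at.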

Eliezer Yudkowsky – American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

The Age of Spiritual Machines – 1999 non-fiction book by Ray Kurzweil

The Age of Spiritual Machines: When Computers Exceed Human Intelligence is a non-fiction book by inventor and futurist Ray Kurzweil about artificial intelligence and the future course of humanity. First published in hardcover on January 1, 1999 by Viking, it has received attention from The New York Times, The New York Review of Books and The Atlantic. In the book Kurzweil outlines his vision for how technology will progress during the 21st century.

Mind uploading – Hypothetical process of digitally emulating a brain

Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.

Friendly artificial intelligence – AI to benefit humanity

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests and contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and on ensuring that it is adequately constrained.

Singularitarianism – Belief in an incipient technological singularity

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

Artificial general intelligence – Hypothetical human-level or stronger AI

An artificial general intelligence (AGI) is a hypothetical type of intelligent agent. If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI, DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

The Singularity Is Near – 2005 non-fiction book by Ray Kurzweil

The Singularity Is Near: When Humans Transcend Biology is a 2005 non-fiction book about artificial intelligence and the future of humanity by inventor and futurist Ray Kurzweil.

The Age of Intelligent Machines – 1990 non-fiction book by Ray Kurzweil

The Age of Intelligent Machines is a non-fiction book about artificial intelligence by inventor and futurist Ray Kurzweil. This was his first book and the Association of American Publishers named it the Most Outstanding Computer Science Book of 1990. It was reviewed in The New York Times and The Christian Science Monitor. The format is a combination of monograph and anthology with contributed essays by artificial intelligence experts such as Daniel Dennett, Douglas Hofstadter, and Marvin Minsky.

An artificial brain is software and hardware with cognitive abilities similar to those of the animal or human brain.

Transcendent Man – 2009 documentary film by Barry Ptolemy

Transcendent Man is a 2009 documentary film by American filmmaker Barry Ptolemy about inventor, futurist and author Ray Kurzweil and his predictions about the future of technology in his 2005 book, The Singularity Is Near. In the film, Ptolemy follows Kurzweil around the world as he discusses his thoughts on the technological singularity, a proposed advancement that will occur sometime in the 21st century when progress in artificial intelligence, genetics, nanotechnology, and robotics will result in the creation of a human-machine civilization.

In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.

How to Create a Mind – 2012 non-fiction book by Ray Kurzweil

How to Create a Mind: The Secret of Human Thought Revealed is a non-fiction book about brains, both human and artificial, by the inventor and futurist Ray Kurzweil. First published in hardcover on November 13, 2012, by Viking Press, it became a New York Times Best Seller. It has received attention from The Washington Post, The New York Times and The New Yorker.

Superintelligence: Paths, Dangers, Strategies – 2014 book by Nick Bostrom

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Existential risk from artificial general intelligence – Hypothesized risk to human existence

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or another irreversible global catastrophe.

James Rodman Barrat is an American documentary filmmaker, speaker, and author of the nonfiction book Our Final Invention: Artificial Intelligence and the End of the Human Era.

AI aftermath scenarios – Overview of AI's possible effects on the human state

Many scholars believe that advances in artificial intelligence, or AI, will eventually lead to a post-scarcity economy in which intelligent machines can outperform humans in nearly every domain. The questions of what such a world might look like, and whether specific scenarios constitute utopias or dystopias, are the subject of active debate.

References

  1. Barrat, James. "Our Final Invention: Artificial Intelligence and the End of the Human Era (Book Review)". New York Journal of Books. Retrieved October 30, 2013.
  2. Scoblete, Greg (December 6, 2013). "Our Final Invention: How the Human Race Goes and Gets Itself Killed". Real Clear Technology. Archived 2019-02-03 at the Wayback Machine.
  3. Miller, Matt. "Artificial intelligence: Our final invention?". The Washington Post (opinion).
  4. Baum, Seth (October 11, 2013). "Our Final Invention: Is AI the Defining Issue for Humanity?". Scientific American. Retrieved February 2, 2014.
  5. Marcus, Gary (October 24, 2013). "Why We Should Think About the Threat of Artificial Intelligence". The New Yorker. Retrieved July 15, 2014.
  6. "The Definitive Tech Books Of 2013". The Huffington Post (December 23, 2013). Retrieved June 11, 2014.