Life 3.0

Life 3.0: Being Human in the Age of Artificial Intelligence
Hardcover edition (US)
Author Max Tegmark
Country United States
Language English
Subject Artificial intelligence
Genre Non-fiction
Publisher Knopf (US); Allen Lane (UK)
Publication date August 23, 2017
Media type Print (hardback)
Pages 280
ISBN 978-1-101-94659-6

Life 3.0: Being Human in the Age of Artificial Intelligence [1] is a 2017 non-fiction book by Swedish-American cosmologist Max Tegmark. It discusses artificial intelligence (AI) and its impact on the future of life on Earth and beyond, covering a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology, and combinations thereof.

Summary

The book begins by positing a scenario in which AI has exceeded human intelligence and become pervasive in society. Tegmark refers to different stages of life since its inception: Life 1.0 referring to biological origins, Life 2.0 referring to cultural developments in humanity, and Life 3.0 referring to the technological age of humans. The book focuses on "Life 3.0", and on emerging technology such as artificial general intelligence that may someday be able not only to learn but also to redesign its own hardware and internal structure.

The first part of the book looks at the origin of intelligence billions of years ago and goes on to project the future development of intelligence. Tegmark considers short-term effects of the development of advanced technology, such as technological unemployment, AI weapons, and the quest for human-level artificial general intelligence (AGI). The book cites examples such as DeepMind and OpenAI, self-driving cars, and AI players that can defeat humans in chess, [2] Jeopardy!, [3] and Go. [4]

After reviewing current issues in AI, Tegmark considers a range of possible futures featuring intelligent machines, humans, or both. The fifth chapter describes a number of potential outcomes, such as altered social structures, the integration of humans and machines, and both positive and negative scenarios such as friendly AI or an AI apocalypse. [5] Tegmark argues that the risks of AI come not from malevolence or conscious behavior per se, but rather from the misalignment of the goals of AI with those of humans. Many of the goals of the book align with those of the Future of Life Institute, [6] of which Tegmark is a co-founder.

The remaining chapters explore concepts in physics, goals, consciousness and meaning, and investigate what society can do to help create a desirable future for humanity.

Reception

Professor Max Tegmark, author of Life 3.0.

One criticism of the book, from Kirkus Reviews, is that some of its scenarios and solutions are a stretch or somewhat prophetic: "Tegmark's solutions to inevitable mass unemployment are a stretch." [7] AI researcher Stuart J. Russell, writing in Nature, said: "I am unlikely to disagree strongly with the premise of Life 3.0. Life, Tegmark argues, may or may not spread through the Universe and 'flourish for billions or trillions of years' because of decisions we make now — a possibility both seductive and overwhelming." [8] Writing in Science, Haym Hirsh called it "a highly readable book that complements The Second Machine Age's economic perspective on the near-term implications of recent accomplishments in AI and the more detailed analysis of how we might get from where we are today to AGI and even the superhuman AI in Superintelligence." [9] The Telegraph called it "One of the very best overviews of the arguments around artificial intelligence". [10] [11]

The Christian Science Monitor said, "Although it's probably not his intention, much of what Tegmark writes will quietly terrify his readers." [12] Publishers Weekly gave a positive review, but also stated that Tegmark's call for researching how to maintain control over superintelligent machines "sits awkwardly beside his acknowledgment that controlling such godlike entities will be almost impossible." [13] Library Journal called it a "must-read" for technologists, but stated the book was not for the casual reader. [14] The Wall Street Journal called it "lucid and engaging"; however, it cautioned readers that the controversial notion that superintelligence could run amok has more credence than it did a few years ago, but is still fiercely opposed by many computer scientists. [15]

Rather than endorse a specific future, the book invites readers to think about what future they would like to see, and to discuss their thoughts on the Future of Life Institute's website. [16] The Wall Street Journal review called this attitude noble but naive, and criticized the referenced website for being "chockablock with promo material for the book". [15]

The hardcover edition was on the general New York Times Best Seller list for two weeks, [17] and appeared on the New York Times business best-seller list in September and October 2017. [18]

Former President Barack Obama included the book in his "best of 2018" list. [19] [20]

Business magnate Elon Musk (who had previously endorsed the thesis that, under some scenarios, advanced AI could jeopardize human survival) recommended Life 3.0 as "worth reading". [21] [22]

Related Research Articles

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

Eliezer Yudkowsky (American AI researcher and writer, born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea of a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Max Tegmark (Swedish-American cosmologist)

Max Erik Tegmark is a Swedish-American physicist, cosmologist and machine learning researcher. He is a professor at the Massachusetts Institute of Technology and the president of the Future of Life Institute. He is also a supporter of the effective altruism movement.

Friendly artificial intelligence (AI to benefit humanity)

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure that it is adequately constrained.

Nick Bostrom (Swedish philosopher and writer, born 1973)

Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He is the founding director of the Future of Humanity Institute at Oxford University.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

AI takeover (hypothetical artificial intelligence scenario)

An AI takeover is a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are very popular throughout science fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Future of Humanity Institute (Oxford interdisciplinary research centre)

The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director is philosopher Nick Bostrom, and its research staff include futurist Anders Sandberg and Giving What We Can founder Toby Ord.

Ethics of artificial intelligence (ethical issues specific to AI)

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics.

In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.

Future of Life Institute (international nonprofit research institute)
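
The Future of Life Institute (FLI) is a nonprofit organization that works to reduce global catastrophic and existential risks facing humanity, particularly risks from advanced artificial intelligence. Max Tegmark is a co-founder of the institute and serves as its president.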

Superintelligence: Paths, Dangers, Strategies (2014 book by Nick Bostrom)

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.

Existential risk from artificial general intelligence (hypothesized risk to human existence)

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or another irreversible global catastrophe.

AI aftermath scenarios (overview of AI's possible effects on the human state)

Many scholars believe that advances in artificial intelligence, or AI, will eventually lead to a semi-apocalyptic post-scarcity economy in which intelligent machines can outperform humans in nearly every domain, if not all of them. The questions of what such a world might look like, and whether specific scenarios constitute utopias or dystopias, are the subject of active debate.

Do You Trust This Computer? is a 2018 American documentary film directed by Chris Paine that outlines the benefits and especially the dangers of artificial intelligence. It features interviews with a range of prominent figures relevant to AI, including Ray Kurzweil, Elon Musk, Michal Kosinski, D. Scott Phoenix, Hiroshi Ishiguro, and Jonathan Nolan. Paine is also known for Who Killed the Electric Car? (2006) and its follow-up, Revenge of the Electric Car (2011).

Human Compatible (2019 book by Stuart J. Russell)

Human Compatible: Artificial Intelligence and the Problem of Control is a 2019 non-fiction book by computer scientist Stuart J. Russell. It asserts that the risk to humanity from advanced artificial intelligence (AI) is a serious concern despite the uncertainty surrounding future progress in AI. It also proposes an approach to the AI control problem.

Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual-reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize that advancement. It originated in a 2010 post on the discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.

References

  1. Tegmark, Max (2017). Life 3.0: Being Human in the Age of Artificial Intelligence (First ed.). New York: Knopf. ISBN 9781101946596. OCLC 973137375.
  2. "IBM100 - Deep Blue". www-03.ibm.com. 2012-03-07. Retrieved 2017-10-20.
  3. Markoff, John (2011-02-16). "On 'Jeopardy!' Watson Win Is All but Trivial". The New York Times. ISSN 0362-4331. Retrieved 2017-10-20.
  4. "In Major AI Breakthrough, Google System Secretly Beats Top Player at the Ancient Game of Go". Wired. Retrieved 2017-10-20.
  5. Harari, Yuval Noah (2017-09-22). "Life 3.0 by Max Tegmark review – we are ignoring the AI apocalypse". The Guardian. ISSN 0261-3077. Retrieved 2017-10-20.
  6. "Podcast: Life 3.0 - Being Human in the Age of Artificial Intelligence". Future of Life Institute. 2017-08-29. Retrieved 2017-10-20.
  7. "Life 3.0 by Max Tegmark". Kirkus Reviews.
  8. Russell, Stuart (2017-08-31). "Artificial intelligence: The future is superintelligent". Nature. 548 (7669): 520–521. Bibcode:2017Natur.548..520R. doi:10.1038/548520a. ISSN 0028-0836.
  9. Hirsh, Haym (2017-08-02). "A physicist explores the future of artificial intelligence". Science. Vol. 357, no. 6350 (published 2017-08-04). Retrieved 2017-10-19.
  10. Poole, Steven (26 November 2017). "Thinking big, snoozing bigger: the best science books of 2017". The Telegraph. Retrieved 8 December 2017.
  11. Poole, Steven (2017-08-27). "Artificial intelligence: how scared should we be about machines taking over?". The Telegraph. ISSN 0307-1235. Retrieved 2017-10-20.
  12. "3 science books compelling enough to speak to all readers". The Christian Science Monitor. 30 August 2017. Retrieved 11 December 2017.
  13. "Nonfiction Book Review: Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark. Knopf, $28 (384p) ISBN 978-1-101-94659-6". Publishers Weekly. 10 July 2017. Retrieved 7 January 2018.
  14. Browning, Natalie (15 September 2017). "Life 3.0: Being Human in the Age of Artificial Intelligence". Library Journal.
  15. Rose, Frank (2017-08-28). "When Machines Run Amok". The Wall Street Journal. ISSN 0099-9660. Retrieved 2017-10-20.
  16. "Superintelligence survey". Future of Life Institute. 15 August 2017. Retrieved 7 January 2018.
  17. "Hardcover Nonfiction Books - Best Sellers - September 24, 2017". The New York Times. 24 September 2017. Retrieved 10 February 2018.
  18. "Business Books - Best Sellers - September 2017". The New York Times.
  19. Caron, Christina (28 December 2018). "Barack Obama's Favorite Book of 2018 Was 'Becoming.' Here's What Else He Liked". The New York Times. Retrieved 31 December 2018.
  20. "Barack Obama". www.facebook.com. Retrieved 31 December 2018.
  21. "Elon Musk says 'A.I. will be the best or worst thing ever for humanity,' recommends a book on the topic". CNBC. 29 August 2017.
  22. Moody, Oliver (30 October 2017). "Why Elon Musk thinks Max Tegmark is the geek who will save the world". The Times of London. Retrieved 27 November 2017.