Possible Minds

Editor: John Brockman
Country: United States
Language: English
Subjects: Artificial intelligence, Futurism
Published: 2019
Publisher: Penguin Press
Media type: Hardcover
Pages: 293
ISBN: 978-0-525-55799-9

Possible Minds: Twenty-five Ways of Looking at AI, edited by John Brockman, is a 2019 collection of essays on the future impact of artificial intelligence.

Contents

Structure

Twenty-five essayists contributed essays responding to cybernetics pioneer Norbert Wiener's 1950 book The Human Use of Human Beings, in which Wiener, fearing future machines built from vacuum tubes and capable of sophisticated logic, warned that "The hour is very late, and the choice of good and evil knocks at our door. We must cease to kiss the whip that lashes us." [1] [2] Wiener stated that a machine "which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us". [3] The essayists seek to address the question: what dangers might advanced artificial intelligence (AI) present to humankind? Prominent essayists include Daniel Dennett, Alison Gopnik, Jaan Tallinn, and George Dyson. [4] Brockman interleaves his own introductions and anecdotes between the contributors' essays. [5]

Ideas

Multiple essayists state that artificial general intelligence is still two to four decades away. Most of the essayists advise proceeding with caution. Hypothetical dangers discussed include societal fragmentation, loss of human jobs, the dominance of multinational corporations wielding powerful AI, and existential risk if superintelligent machines develop a drive for self-preservation. [1] Computer scientist W. Daniel Hillis states "Humans might be seen as minor annoyances, like ants at a picnic". [2] Some essayists argue that AI has already become an integral part of human culture; geneticist George M. Church suggests that modern humans are already "transhumans" when compared with humans in the Stone Age. [4] Many of the essays are influenced by past failures of AI. MIT's Neil Gershenfeld states "Discussions about artificial intelligence have been [manic-depressive]: depending on how you count, we're now in the fifth boom-and-bust cycle." Brockman states that "over the decades I rode with [the AI pioneers] on waves of enthusiasm, and into valleys of disappointment". [3] Many essayists emphasize the limitations of past and current AI; Church notes that Watson, the 2011 Jeopardy! champion, required 85,000 watts of power, compared with the roughly 20 watts used by a human brain. [5]

Reception

Kirkus Reviews stated that readers who want to ponder the future impact of AI "will not find a better introduction than this book." [6] Publishers Weekly called the book "enlightening, entertaining, and exciting reading". [4] Future Perfect (Vox) noted that the book [note 1] "makes for gripping reading, [and the book] can get perspectives from the preeminent voices of AI... but [the book] cannot make those people talk to each other." [3] Booklist stated the book includes "many rich ideas" to "savor and contemplate". [7] In Foreign Affairs, technology journalist Kenneth Cukier called the book "a fascinating map". [2]

Explanatory notes

  1. Alongside another AI compendium, Architects of Intelligence.

References

  1. Lord, Rich (23 February 2019). "'Possible Minds': Will humans matter in an age of sentient machines?". Pittsburgh Post-Gazette. Retrieved 28 June 2020.
  2. Cukier, Kenneth Neil (9 July 2019). "Ready for Robots? How to Think About the Future of AI". Foreign Affairs. Retrieved 28 June 2020.
  3. Piper, Kelsey (2 March 2019). "How will AI change our lives? Experts can't agree — and that could be a problem". Vox. Retrieved 28 June 2020.
  4. "Nonfiction Book Review: Possible Minds: 25 Ways of Looking at AI, edited by John Brockman". Publishers Weekly. February 2019. Retrieved 28 June 2020.
  5. Žliobaitė, Indrė (August 2019). "AI minds need to think about energy constraints". Nature Machine Intelligence. 1 (8): 335. doi:10.1038/s42256-019-0083-7. hdl:10138/318107. S2CID 201493499.
  6. "Possible Minds". Kirkus Reviews. 19 February 2019. Retrieved 28 June 2020.
  7. Hays, Carl (1 December 2018). "Possible Minds: 25 Ways of Looking at AI, edited by John Brockman". Booklist. Retrieved 28 June 2020.