Artificial Intelligence: A Guide for Thinking Humans

[Cover: first edition (US)]

Author: Melanie Mitchell
Country: United States
Language: English
Genre: Popular science
Publisher: Farrar, Straus and Giroux/Macmillan (US); Pelican Books (UK)
Publication date: October 2019
Pages: 448 pp (hardcover 1st edition)
ISBN: 9780241404829 (hardcover 1st edition)

Artificial Intelligence: A Guide for Thinking Humans is a 2019 nonfiction book by Santa Fe Institute professor Melanie Mitchell.[1] The book provides an overview of artificial intelligence (AI) technology and argues that people tend to overestimate the abilities of artificial intelligence.[2][3]

Overview

Mitchell describes the fear expressed by her mentor, cognitive scientist and AI pioneer Douglas Hofstadter, that advances in artificial intelligence could turn human beings into "relics".[4] Mitchell offers examples of AI systems like Watson that are trained to master specific tasks, and points out that such computers lack the general intelligence that humans have.[5] Mitchell argues that achieving superintelligence would require machines to acquire commonsense reasoning abilities that are nowhere in sight: "Today's AI is far from general intelligence, and I don't believe that machine 'superintelligence' is anywhere on the horizon." Mitchell devotes 13 pages to "Trustworthy and Ethical AI".[1][6] Mitchell states that artificial intelligence is vulnerable to errors, to racial bias, and to malicious hacking, including surprisingly easy adversarial attacks: "If there are statistical associations in the training data... the machine will happily learn those instead of what you wanted it to learn." Mitchell also includes lighthearted content, such as documenting the Star Trek computer's status as an aspirational lodestar within the AI community.[4][5]
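
Mitchell's point about statistical associations can be made concrete with a minimal sketch (an illustration, not an example from the book): a classifier trained on data in which an irrelevant artifact happens to correlate with the label will lean on the artifact rather than the intended signal, then fail when the artifact disappears. The feature names and numbers below are invented for the illustration.

```python
# Minimal sketch of "shortcut" learning (illustrative; data and feature
# names are invented). A spurious artifact correlates with the label in
# training, so the model learns it instead of the intended signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                      # binary labels
real_signal = y + rng.normal(0.0, 2.0, n)      # genuinely causal, but noisy
artifact = y + rng.normal(0.0, 0.1, n)         # merely correlated, but clean
X_train = np.column_stack([real_signal, artifact])

model = LogisticRegression(max_iter=1000).fit(X_train, y)
print("learned weights:", model.coef_)         # artifact weight dominates
print("training accuracy:", model.score(X_train, y))

# At deployment the artifact carries no information, and accuracy collapses:
X_deploy = np.column_stack([y + rng.normal(0.0, 2.0, n),
                            rng.normal(0.0, 0.1, n)])
print("deployment accuracy:", model.score(X_deploy, y))
```

The model scores nearly perfectly on its training distribution yet drops to near chance when the shortcut vanishes, which is exactly the failure mode Mitchell's quote describes.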

Reception

A review in Library Journal praised the book's historical overview as "a worthy and compelling narrative in itself".[7] Kirkus Reviews judged that, although parts of the book were "too abstruse", most of it was "surprisingly lucid".[6] Publishers Weekly called the book "accessible" and "worthy", and judged that it should "assuage lay readers' fears about AI".[5] The New Yorker likewise characterized it as reassuring, and as "accessible" despite its technical nature.[2]

In the Chicago Tribune, author John Warner states that Mitchell is a "clear, cogent and interesting" writer who "knows what she's talking about". Warner notes that "Mitchell is not particularly worried" about AI triggering a technological singularity, and that he trusts her expertise: "The book makes a case that we're much farther from self-driving cars than the popular hype would have us believe... (the book) has also enhanced my appreciation for the complexity and ineffability of human cognition." Warner finds the book empowering, stating that the things we may see as human flaws help to make us intelligent in ways computers can't match, and that Mitchell's insights help to validate his own handpicked book recommendations despite the existence of automated Amazon recommendations.[8] In Skeptic, computer programmer Peter Kassan compares the book favorably with "histrionic" works such as Life 3.0, You Look Like a Thing and I Love You, and The Age of Spiritual Machines. Kassan calls the book "the most intelligent book on the subject" and praises Mitchell for being "measured, cautious, and often skeptical", unlike "most active practitioners in the field".[1]

In The Christian Science Monitor, author Barbara Spindel states that the "lucid", "clear-eyed" and "fascinating" book does a good job of showing that artificial general intelligence is nowhere near, and believes that "many readers will be reassured to know that we will not soon have to bow down to our computer overlords." Spindel expresses surprise that Mitchell nonetheless reveals a personal passion for trying to solve the puzzle of commonsense reasoning, which would presumably enable the development of superintelligent machines: "While computers won't surpass humans anytime soon, not everyone will be convinced that the effort to help them along is a good idea".[4]

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

<span class="mw-page-title-main">Marvin Minsky</span> American cognitive scientist (1927–2016)

Marvin Lee Minsky was an American cognitive and computer scientist concerned largely with research on artificial intelligence (AI). He co-founded the Massachusetts Institute of Technology's AI laboratory and wrote several texts concerning AI and philosophy.

The technological singularity, or simply the singularity, is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a positive feedback loop of self-improvement cycles: each new, more intelligent generation appears more and more rapidly, causing an "explosion" in intelligence that ultimately produces a powerful superintelligence qualitatively surpassing all human intelligence.
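
The shape of Good's feedback loop can be caricatured numerically. In the toy recurrence below (the functional form and constants are invented for this sketch and are not drawn from Good, Mitchell, or any quantitative model), each generation's self-improvement step scales superlinearly with current capability, so the sequence runs away after a few dozen generations rather than growing steadily.

```python
# Toy caricature of the intelligence-explosion feedback loop (illustrative
# only; the recurrence and constants are invented for this sketch).
# Each generation improves itself in proportion to the square of its
# current capability:
#     I[n+1] = I[n] + k * I[n]**2
# The superlinear term is what turns steady growth into a runaway.
k = 0.05
capability = 1.0
for generation in range(1, 101):
    capability += k * capability ** 2
    if generation % 5 == 0 or capability > 1e6:
        print(f"generation {generation:3d}: capability = {capability:.3g}")
    if capability > 1e6:
        break  # "explosion": capability has left any fixed scale behind
```

With a merely linear improvement term the same loop compounds gently; the sketch only shows why the hypothesis hinges on self-improvement getting easier as the agent gets smarter.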

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that can perform as well or better than humans on a wide range of cognitive tasks. This is in contrast to narrow AI, which is designed for specific tasks. AGI is considered one of various definitions of strong AI.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

<span class="mw-page-title-main">AI takeover</span> Hypothetical artificial intelligence scenario

An AI takeover is a scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are popular throughout science fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

In artificial intelligence (AI), commonsense reasoning is a human-like ability to make presumptions about the type and essence of ordinary situations humans encounter every day. These assumptions include judgments about the nature of physical objects, taxonomic properties, and people's intentions. A device that exhibits commonsense reasoning might be capable of drawing conclusions that are similar to humans' folk psychology and naive physics.
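
The naive-physics component is easy to state and notoriously hard to learn from data. A deliberately hand-coded sketch (the rules and object descriptions below are invented for illustration) shows the kind of default inference involved: unsupported objects fall, contained objects move with their containers, fragile objects break when dropped.

```python
# Hand-coded caricature of naive-physics defaults (illustrative only; the
# rules and object descriptions are invented). Commonsense reasoning makes
# such presumptions by default and retracts them when context overrides.
def predict(obj: dict) -> str:
    """Return the default conclusion a person would draw without being told."""
    if obj.get("material") == "glass" and obj.get("dropped"):
        return "likely breaks"                     # fragility default
    if obj.get("inside") is not None:
        return f"moves with the {obj['inside']}"   # containment default
    if not obj.get("supported", True):
        return "falls"                             # unsupported objects fall
    return "stays put"

print(predict({"name": "cup", "supported": False}))   # falls
print(predict({"name": "coin", "inside": "pocket"}))  # moves with the pocket
print(predict({"name": "vase", "material": "glass", "dropped": True}))  # likely breaks
```

The point of the sketch is its brittleness: every rule has countless exceptions a human handles effortlessly, which is why commonsense reasoning remains an open problem.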

Melanie Mitchell is an American scientist. She is the Davis Professor of Complexity at the Santa Fe Institute. Her major work has been in the areas of analogical reasoning, complex systems, genetic algorithms and cellular automata, and her publications in those fields are frequently cited.

The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not "real" intelligence.

In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.

Superintelligence: Paths, Dangers, Strategies (2014 book by Nick Bostrom)

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Existential risk from artificial general intelligence is the idea that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.

The Master Algorithm (book by Pedro Domingos)

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World is a book by Pedro Domingos released in 2015. Domingos wrote the book in order to generate interest from people outside the field.

Life 3.0 (2017 book by Max Tegmark on artificial intelligence)

Life 3.0: Being Human in the Age of Artificial Intelligence is a 2017 non-fiction book by Swedish-American cosmologist Max Tegmark. Life 3.0 discusses artificial intelligence (AI) and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology and combinations thereof.

<span class="mw-page-title-main">AI aftermath scenarios</span> Overview of AIs possible effects on the human state

Some scholars believe that advances in artificial intelligence, or AI, will eventually lead to a semi-apocalyptic post-scarcity and post-work economy where intelligent machines can outperform humans in almost every, if not every, domain. The questions of what such a world might look like, and whether specific scenarios constitute utopias or dystopias, are the subject of active debate.

Human Compatible (2019 book by Stuart J. Russell)

Human Compatible: Artificial Intelligence and the Problem of Control is a 2019 non-fiction book by computer scientist Stuart J. Russell. It asserts that the risk to humanity from advanced artificial intelligence (AI) is a serious concern despite the uncertainty surrounding future progress in AI. It also proposes an approach to the AI control problem.

<span class="mw-page-title-main">Novacene</span> 2019 book by James Lovelock

Novacene: The Coming Age of Hyperintelligence is a 2019 non-fiction book by scientist and environmentalist James Lovelock. It has been published by Penguin Books/Allen Lane in the UK, and republished by the MIT Press. The book was co-authored by journalist Bryan Appleyard. It predicts that a benevolent eco-friendly artificial superintelligence will someday become the dominant lifeform on the planet and argues humanity is on the brink of a new era: the Novacene.

The Alignment Problem (2020 non-fiction book by Brian Christian)

The Alignment Problem: Machine Learning and Human Values is a 2020 non-fiction book by the American writer Brian Christian. It is based on numerous interviews with experts trying to build artificial intelligence systems, particularly machine learning systems, that are aligned with human values.

References

  1. Kassan, Peter (December 31, 2019). "Ten Years Away…and Always Will Be (a Review of Artificial Intelligence: A Guide for Thinking Humans)". Skeptic. Vol. 25, no. 1. Retrieved May 22, 2020.
  2. "Briefly Noted Book Reviews". The New Yorker. November 2019. Retrieved May 22, 2020. "Mitchell emphasizes the limitations of even advanced machines... 'We humans tend to overestimate AI advances and underestimate the complexity of our own intelligence.'"
  3. "Artificial Intelligence | Melanie Mitchell | Macmillan". US Macmillan. "(Mitchell) explores the profound disconnect between the hype and the actual achievements in AI."
  4. Spindel, Barbara (October 25, 2019). "Fears about robot overlords are (perhaps) premature". The Christian Science Monitor. Retrieved May 22, 2020.
  5. "Nonfiction Book Review: Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell. Farrar, Straus and Giroux, $28 (352p) ISBN 978-0-374-25783-5". Publishers Weekly. 2019. Retrieved May 22, 2020.
  6. "ARTIFICIAL INTELLIGENCE". Kirkus Reviews. 2019. Retrieved May 22, 2020.
  7. Hahn, Jim (2019). "Artificial Intelligence: A Guide for Thinking Humans". Library Journal. Retrieved May 22, 2020.
  8. Warner, John (2019). "If you're worried artificial intelligence is coming for you, read Melanie Mitchell's new book". Chicago Tribune. Retrieved May 22, 2020.