| Editor | John Brockman |
| --- | --- |
| Language | English |
| Subject | Artificial intelligence, Futurism |
| Published | 2019 |
| Publisher | Penguin Press |
| Publication place | United States |
| Media type | Hardcover |
| Pages | 293 |
| ISBN | 9780525557999 |
Possible Minds: Twenty-five Ways of Looking at AI, edited by John Brockman, is a 2019 collection of essays on the future impact of artificial intelligence.
Twenty-five essayists contributed essays responding to artificial intelligence (AI) pioneer Norbert Wiener's 1950 book The Human Use of Human Beings, in which Wiener, fearing future machines built from vacuum tubes and capable of sophisticated logic, warned that "The hour is very late, and the choice of good and evil knocks at our door. We must cease to kiss the whip that lashes us." [1] [2] Wiener stated that an AI "which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us". [3] The essayists seek to address the question: What dangers might advanced AI present to humankind? Prominent essayists include Daniel Dennett, Alison Gopnik, Jaan Tallinn, and George Dyson. [4] Brockman interleaves his own introductions and anecdotes between the contributors' essays. [5]
Multiple essayists state that artificial general intelligence is still two to four decades away. Most of the essayists advise proceeding with caution. Hypothetical dangers discussed include societal fragmentation, loss of human jobs, dominance of multinational corporations with powerful AI, and existential risk if superintelligent machines develop a drive for self-preservation. [1] Computer scientist W. Daniel Hillis states "Humans might be seen as minor annoyances, like ants at a picnic". [2] Some essayists argue that AI has already become an integral part of human culture; geneticist George M. Church suggests that modern humans are already "transhumans" when compared with humans of the Stone Age. [4] Many of the essays are influenced by past failures of AI. MIT's Neil Gershenfeld states "Discussions about artificial intelligence have been [manic-depressive]: depending on how you count, we're now in the fifth boom-and-bust cycle." Brockman states "over the decades I rode with [the AI pioneers] on waves of enthusiasm, and into valleys of disappointment". [3] Many essayists emphasize the limitations of past and current AI; Church notes that 2011 Jeopardy! champion Watson required 85,000 watts of power, compared with the human brain, which uses 20 watts. [5]
Kirkus Reviews stated that readers who want to ponder the future impact of AI "will not find a better introduction than this book." [6] Publishers Weekly called the book "enlightening, entertaining, and exciting reading". [4] Future Perfect (Vox) noted the book "makes for gripping reading, [and] can get perspectives from the preeminent voices of AI... but [it] cannot make those people talk to each other." [3] Booklist stated the book includes "many rich ideas" to "savor and contemplate". [7] In Foreign Affairs, technology journalist Kenneth Cukier called the book "a fascinating map". [2]
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
John Brockman is an American literary agent and author specializing in scientific literature. He established the Edge Foundation, an organization that brings together leading edge thinkers across a broad range of scientific and technical fields.
The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.
Artificial intelligence is a recurrent theme in science fiction, whether utopian, emphasising the potential benefits, or dystopian, emphasising the dangers.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
Gary Fred Marcus is an American psychologist, cognitive scientist, and author, known for his research on the intersection of cognitive psychology, neuroscience, and artificial intelligence (AI).
The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.
Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
Martin Ford is an American futurist and author focusing on artificial intelligence and robotics, and the impact of these technologies on the job market, economy and society.
Susan Lynn Schneider is an American philosopher and artificial intelligence expert. She is the founding director of the Center for the Future Mind at Florida Atlantic University where she also holds the William F. Dietrich Distinguished Professorship. Schneider has also held the Baruch S. Blumberg NASA/Library of Congress Chair in Astrobiology, Exploration, and Scientific Innovation at NASA and the Distinguished Scholar Chair at the Library of Congress.
In the field of artificial intelligence (AI), AI alignment aims to steer AI systems toward a person's or group's intended goals, preferences, and ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.
Life 3.0: Being Human in the Age of Artificial Intelligence is a 2017 non-fiction book by Swedish-American cosmologist Max Tegmark. Life 3.0 discusses artificial intelligence (AI) and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology and combinations thereof.
Human Compatible: Artificial Intelligence and the Problem of Control is a 2019 non-fiction book by computer scientist Stuart J. Russell. It asserts that the risk to humanity from advanced artificial intelligence (AI) is a serious concern despite the uncertainty surrounding future progress in AI. It also proposes an approach to the AI control problem.
Artificial Intelligence: A Guide for Thinking Humans is a 2019 nonfiction book by Santa Fe Institute professor Melanie Mitchell. The book provides an overview of artificial intelligence (AI) technology, and argues that people tend to overestimate the abilities of artificial intelligence.
The Alignment Problem: Machine Learning and Human Values is a 2020 non-fiction book by the American writer Brian Christian. It is based on numerous interviews with experts trying to build artificial intelligence systems, particularly machine learning systems, that are aligned with human values.