Hardcover edition

| | |
|---|---|
| Author | Stuart J. Russell |
| Language | English |
| Subject | AI control problem |
| Genre | Non-fiction |
| Publisher | Viking |
| Publication date | October 8, 2019 |
| Publication place | United States |
| Pages | 352 |
| ISBN | 978-0-525-55861-3 |
| OCLC | 1083694322 |
Human Compatible: Artificial Intelligence and the Problem of Control is a 2019 non-fiction book by computer scientist Stuart J. Russell. It asserts that the risk to humanity from advanced artificial intelligence (AI) is a serious concern despite the uncertainty surrounding future progress in AI. It also proposes an approach to the AI control problem.
Russell begins by asserting that the standard model of AI research, in which the primary definition of success is getting better and better at achieving rigid human-specified goals, is dangerously misguided. Such goals may not reflect what human designers intend, for example by omitting human values that were never written into the goals. If an AI developed according to the standard model were to become superintelligent, it would likely not fully reflect human values and could be catastrophic to humanity. Russell asserts that precisely because the timeline for developing human-level or superintelligent AI is highly uncertain, safety research should begin as soon as possible, since it is also highly uncertain how long such research would take to complete.
Russell argues that continuing progress in AI capability is inevitable because of economic pressures. Such pressures can already be seen in the development of existing AI technologies such as self-driving cars and personal assistant software. Moreover, human-level AI could be worth many trillions of dollars. Russell then examines the current debate surrounding AI risk. He offers refutations of a number of common arguments dismissing AI risk and attributes much of their persistence to tribalism: AI researchers may see AI risk concerns as an "attack" on their field. Russell reiterates that there are legitimate reasons to take AI risk concerns seriously and that economic pressures make continued innovation in AI inevitable.
Russell then proposes an approach to developing provably beneficial machines that focuses on deference to humans. Unlike in the standard model of AI, where the objective is rigid and certain, this approach would have the AI's true objective remain uncertain, with the AI only approaching certainty about it as it gains more information about humans and the world. This uncertainty would, ideally, prevent catastrophic misunderstandings of human preferences and encourage cooperation and communication with humans. Russell concludes by calling for tighter governance of AI research and development as well as cultural introspection about the appropriate amount of autonomy to retain in an AI-dominated world.
Russell lists three principles to guide the development of beneficial machines. He emphasizes that these principles are not meant to be explicitly coded into the machines; rather, they are intended for human developers. The principles are as follows: [1] : 173
1. The machine's only objective is to maximize the realization of human preferences.
2. The machine is initially uncertain about what those preferences are.
3. The ultimate source of information about human preferences is human behavior.
The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future." [1] : 173 Similarly, "behavior" includes any choice between options, [1] : 177 and the uncertainty is such that some probability, which may be quite small, must be assigned to every logically possible human preference. [1] : 201
Russell explores inverse reinforcement learning, in which a machine infers a reward function from observed behavior, as a possible basis for learning human preferences. [1] : 191–193
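The flavor of this idea can be illustrated with a toy Bayesian preference-inference sketch (not taken from the book): the machine starts uncertain over a small set of candidate preference hypotheses and updates a posterior from observed human choices, modeled as noisily rational. The options, hypotheses, and utility values below are hypothetical stand-ins for the vastly richer preference space Russell describes.

```python
import math

# Toy illustration of Russell's three principles via Bayesian preference inference.
# All names and numbers are hypothetical.

options = ["coffee", "tea", "water"]

# Candidate hypotheses about the human's preferences: each maps option -> utility.
hypotheses = {
    "likes_coffee": {"coffee": 2.0, "tea": 0.5, "water": 0.0},
    "likes_tea":    {"coffee": 0.5, "tea": 2.0, "water": 0.0},
    "indifferent":  {"coffee": 0.0, "tea": 0.0, "water": 0.0},
}

# Principle 2: the machine is initially uncertain -- every hypothesis gets prior mass.
posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}

def choice_likelihood(hyp, chosen, offered, beta=1.0):
    """Probability the human picks `chosen` from `offered` under hypothesis `hyp`,
    assuming Boltzmann-rational (noisily optimal) behavior with temperature 1/beta."""
    utils = hypotheses[hyp]
    weights = {o: math.exp(beta * utils[o]) for o in offered}
    return weights[chosen] / sum(weights.values())

def update(chosen, offered):
    """Principle 3: observed human behavior (a choice among options) is the evidence."""
    global posterior
    unnorm = {h: p * choice_likelihood(h, chosen, offered) for h, p in posterior.items()}
    total = sum(unnorm.values())
    posterior = {h: p / total for h, p in unnorm.items()}

# Observed behavior: the human repeatedly picks tea when coffee is also available.
for _ in range(5):
    update(chosen="tea", offered=options)

print(posterior)  # probability mass shifts toward "likes_tea"

# Principle 1: act to maximize the realization of human preferences, here by
# choosing the option with the highest posterior-expected utility.
best = max(options, key=lambda o: sum(posterior[h] * hypotheses[h][o] for h in hypotheses))
print("machine's recommended option:", best)
```

Because the posterior never collapses to certainty after finitely many observations, the machine retains an incentive to keep querying and deferring to the human rather than acting irrevocably on a possibly mistaken preference model.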
Several reviewers agreed with the book's arguments. Ian Sample in The Guardian called it "convincing" and "the most important book on AI this year". [2] Richard Waters of the Financial Times praised the book's "bracing intellectual rigour". [3] Kirkus Reviews endorsed it as "a strong case for planning for the day when machines can outsmart us". [4]
The same reviewers characterized the book as "wry and witty", [2] or "accessible" [4] due to its "laconic style and dry humour". [3] Matthew Hutson of the Wall Street Journal said "Mr. Russell's exciting book goes deep while sparkling with dry witticisms". [5] A Library Journal reviewer called it "The right guide at the right time". [6]
James McConnachie of The Times wrote "This is not quite the popular book that AI urgently needs. Its technical parts are too difficult, and its philosophical ones too easy. But it is fascinating and significant." [7]
By contrast, Human Compatible was criticized in its Nature review by David Leslie, an Ethics Fellow at the Alan Turing Institute; and similarly in a New York Times opinion essay by Melanie Mitchell. One point of contention was whether superintelligence is possible. Leslie states Russell "fails to convince that we will ever see the arrival of a 'second intelligent species'", [8] and Mitchell doubts a machine could ever "surpass the generality and flexibility of human intelligence" without losing "the speed, precision, and programmability of a computer". [9] A second disagreement was whether intelligent machines would naturally tend to adopt so-called "common sense" moral values. In Russell's thought experiment about a geoengineering robot that "asphyxiates humanity to deacidify the oceans", Leslie "struggles to identify any intelligence". Similarly, Mitchell believes an intelligent robot would naturally tend to be "tempered by the common sense, values and social judgment without which general intelligence cannot exist". [10] [11]
The book was longlisted for the 2019 Financial Times/McKinsey Award. [12]
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence that would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.
Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.
Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests such as fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.
Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.
Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley, and was an adjunct professor of neurological surgery at the University of California, San Francisco from 2008 to 2011. He holds the Smith-Zadeh Chair in Engineering at the University of California, Berkeley, where he founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI). Russell is the co-author, with Peter Norvig, of the field's authoritative textbook, Artificial Intelligence: A Modern Approach, which is used in more than 1,500 universities in 135 countries.
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.
An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by an artificial superintelligence (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
The following outline is provided as an overview of and topical guide to artificial intelligence.
In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.
Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.
Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, mostly known for his work on AI safety and cybersecurity. He holds a PhD from the University at Buffalo (2008). He is the founder and current director of Cyber Security Lab, in the department of Computer Engineering and Computer Science at the Speed School of Engineering of the University of Louisville.
Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.
Instrumental convergence is the hypothetical tendency for most sufficiently intelligent, goal-directed beings to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.
Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
In the field of artificial intelligence (AI), AI alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.
Life 3.0: Being Human in the Age of Artificial Intelligence is a 2017 non-fiction book by Swedish-American cosmologist Max Tegmark. Life 3.0 discusses artificial intelligence (AI) and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology and combinations thereof.
Some scholars believe that advances in artificial intelligence, or AI, will eventually lead to a semi-apocalyptic post-scarcity and post-work economy where intelligent machines can outperform humans in almost every, if not every, domain. The questions of what such a world might look like, and whether specific scenarios constitute utopias or dystopias, are the subject of active debate.
The Center for Human-Compatible Artificial Intelligence (CHAI) is a research center at the University of California, Berkeley focusing on advanced artificial intelligence (AI) safety methods. The center was founded in 2016 by a group of academics led by Berkeley computer science professor and AI expert Stuart J. Russell. Russell is known for co-authoring the widely used AI textbook Artificial Intelligence: A Modern Approach.
Artificial Intelligence: A Guide for Thinking Humans is a 2019 nonfiction book by Santa Fe Institute professor Melanie Mitchell. The book provides an overview of artificial intelligence (AI) technology, and argues that people tend to overestimate the abilities of artificial intelligence.