| Murray Shanahan | |
| --- | --- |
| Born | Murray Patrick Shanahan |
| Alma mater | Imperial College London (BSc); University of Cambridge (PhD) |
| Scientific career | |
| Fields | Artificial intelligence; neurodynamics; consciousness [1] |
| Institutions | Imperial College London; DeepMind |
| Thesis | Exploiting dependencies in search and inference mechanisms |
| Doctoral advisor | William F. Clocksin [2] |
| Website | www |
Murray Patrick Shanahan is Professor of Cognitive Robotics at Imperial College London, [3] in the Department of Computing, and a senior scientist at DeepMind. [4] He researches artificial intelligence, robotics, and cognitive science. [1] [5]
Shanahan was educated at Imperial College London [6] and completed his PhD at the University of Cambridge in 1987, [7] supervised by William F. Clocksin. [2]
At Imperial College's Department of Computing, Shanahan was a postdoc from 1987 to 1991 and then an advanced research fellow until 1995. He was a senior research fellow at Queen Mary & Westfield College from 1995 to 1998, then returned to Imperial, joining the Department of Electrical Engineering and, in 2005, the Department of Computing, where he was promoted from Reader to Professor in 2006. [6]

Shanahan was a scientific advisor for Alex Garland's 2014 film Ex Machina. [8] Garland credited Shanahan with correcting an error regarding the Turing test in his initial scripts. [9] Shanahan is on the external advisory board of the Cambridge Centre for the Study of Existential Risk. [10] [11]

In 2016, Shanahan and his colleagues published a proof of concept for "Deep Symbolic Reinforcement Learning", a hybrid AI architecture that combines symbolic AI with neural networks and exhibits a form of transfer learning (the general idea is sketched below). [12] [13] In 2017, citing "the potential [brain drain] on academia of the current tech hiring frenzy" as a concern, Shanahan negotiated a joint position at Imperial College London and DeepMind. [4] The Atlantic and Wired UK have characterized Shanahan as an influential researcher. [14] [15]
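The published system is far more elaborate, but the gist of such a hybrid can be sketched in a few lines: a neural front end (stubbed out here as a trivial `perceive` function) maps raw observations to a compact symbolic state, and an ordinary tabular Q-learner plans over those symbols. Because the policy is learned over symbols rather than pixels, it could in principle transfer across visually different renderings of the same task. The toy one-dimensional environment and every name below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the deep symbolic reinforcement learning idea: a (stubbed)
# neural front end maps raw observations to discrete symbols, and a tabular
# Q-learner plans over those symbols. Purely illustrative.
import random
from collections import defaultdict

def perceive(pixels):
    """Stand-in for a neural network that extracts symbolic entities.
    Here we simply locate the agent (1) and goal (2) markers."""
    return (pixels.index(1), pixels.index(2))

q_table = defaultdict(float)   # Q-values indexed by (symbolic state, action)
actions = (-1, +1)             # move left / move right on a 1-D strip
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action, size=8):
    """Toy environment dynamics expressed over the symbolic state."""
    agent, goal = state
    agent = max(0, min(size - 1, agent + action))
    reward = 1.0 if agent == goal else -0.01
    return (agent, goal), reward, agent == goal

for _ in range(500):
    # Render a toy 'image' with agent and goal markers, then perceive it symbolically.
    size = 8
    pixels = [0] * size
    pixels[random.randrange(size)] = 1
    pixels[random.choice([i for i, v in enumerate(pixels) if v == 0])] = 2
    state = perceive(pixels)
    done = False
    while not done:
        if random.random() < epsilon:
            action = random.choice(actions)                      # explore
        else:
            action = max(actions, key=lambda a: q_table[(state, a)])  # exploit
        next_state, reward, done = step(state, action)
        best_next = max(q_table[(next_state, a)] for a in actions)
        q_table[(state, action)] += alpha * (reward + gamma * best_next
                                             - q_table[(state, action)])
        state = next_state
```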
In 2010, Shanahan published Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds, a book that helped inspire the 2014 film Ex Machina. [16] The book argues that cognition revolves around a process of "inner rehearsal" by an embodied entity working to predict the consequences of its physical actions. [17]
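Reduced to its bare bones, an "inner rehearsal" loop resembles model-based action selection: the agent runs each candidate action through an internal forward model and only then commits to the best-scoring one in the world. The following is a minimal sketch of that general idea; the grid world, scoring function, and all names are assumptions for illustration, not anything from the book.

```python
# Toy illustration of "inner rehearsal": before acting, the agent simulates each
# candidate action with an internal forward model of its body/world and picks
# the action whose predicted outcome scores best. Purely illustrative.

def forward_model(state, action):
    """Internal simulation: predict the next state without acting in the world."""
    x, y = state
    dx, dy = action
    return (x + dx, y + dy)

def evaluate(state, goal):
    """How desirable is a predicted state? (negative distance to the goal)"""
    return -abs(state[0] - goal[0]) - abs(state[1] - goal[1])

def rehearse_and_act(state, goal, actions):
    """Inner rehearsal: simulate every action first, then commit to the best."""
    return max(actions, key=lambda a: evaluate(forward_model(state, a), goal))

actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
state, goal = (0, 0), (3, 2)
while state != goal:
    a = rehearse_and_act(state, goal, actions)
    state = forward_model(state, a)  # in a real agent, the world takes this step
print("reached", state)
```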
In 2015, Shanahan published The Technological Singularity, which runs through various scenarios following the invention of an artificial intelligence that makes better versions of itself and rapidly outcompetes humans. [18] The book aims to be an evenhanded primer on the issues surrounding superhuman intelligence. [19] Shanahan takes the view that we do not know how superintelligences will behave: whether they will be friendly or hostile, predictable or inscrutable. [20]
Shanahan also authored Solving the Frame Problem (MIT Press, 1997) and co-authored Search, Inference and Dependencies in Artificial Intelligence (Ellis Horwood, 1989). [6]
Shanahan said in 2014 about existential risks from AI that "The AI community does not think it's a substantial worry, whereas the public does think it's much more of an issue. The right place to be is probably in-between those two extremes." He added that "it's probably a good idea for AI researchers to start thinking [now] about the [existential risk] issues that Stephen Hawking and others have raised." [21] Shanahan said in 2018 that there was no need to panic yet about an AI takeover, because multiple conceptual breakthroughs would be needed for artificial general intelligence (AGI) and "it is impossible to know when [AGI] might be achievable". He stated that AGI would come hand in hand with true understanding, enabling, for example, safer automated vehicles and medical diagnosis applications. [22] [23] In 2020, Shanahan characterized AI as lacking the common sense of a human child. [24]
The technological singularity, or simply the singularity, is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence that would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.
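Good's feedback loop is often illustrated with a toy recurrence in which each generation's improvement is proportional to its current capability, so gains compound faster than exponentially. The update rule and the constant k below are arbitrary modelling assumptions, not a claim about any real system.

```python
# Toy recurrence for a positive feedback loop of self-improvement:
# capability_{n+1} = capability_n + k * capability_n**2, so each more capable
# generation improves itself faster than the last. k is arbitrary.
k = 0.1
capability = 1.0
for generation in range(1, 11):
    capability += k * capability ** 2   # improvement scales with capability itself
    print(f"generation {generation:2d}: capability = {capability:.2f}")
```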
Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure that it is adequately constrained.
Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.
Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley, and was from 2008 to 2011 an adjunct professor of neurological surgery at the University of California, San Francisco. He holds the Smith-Zadeh Chair in Engineering at the University of California, Berkeley, and founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI) there. Russell is the co-author, with Peter Norvig, of the authoritative textbook of the field of AI, Artificial Intelligence: A Modern Approach, used in more than 1,500 universities in 135 countries.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems, whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and may or may not be associated with a technological singularity.
The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.
An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
An artificial brain is software and hardware with cognitive abilities similar to those of the animal or human brain.
The following outline is provided as an overview of and topical guide to artificial intelligence.
Ben Goertzel is a computer scientist, artificial intelligence researcher, and businessman. He helped popularize the term 'artificial general intelligence'.
OpenCog is a project that aims to build an open source artificial intelligence framework. OpenCog Prime is an architecture for robot and virtual embodied cognition that defines a set of interacting components designed to give rise to human-equivalent artificial general intelligence (AGI) as an emergent phenomenon of the whole system. OpenCog Prime's design is primarily the work of Ben Goertzel while the OpenCog framework is intended as a generic framework for broad-based AGI research. Research utilizing OpenCog has been published in journals and presented at conferences and workshops including the annual Conference on Artificial General Intelligence. OpenCog is released under the terms of the GNU Affero General Public License.
Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, mostly known for his work on AI safety and cybersecurity. He holds a PhD from the University at Buffalo (2008). He is the founder and current director of the Cyber Security Lab in the Department of Computer Engineering and Computer Science at the Speed School of Engineering of the University of Louisville.
The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.
Ex Machina is a 2014 science fiction thriller film written and directed by Alex Garland in his directorial debut. A co-production between the United Kingdom and the United States, it stars Domhnall Gleeson, Alicia Vikander, and Oscar Isaac. It follows a programmer who is invited by his CEO to administer the Turing test to an intelligent humanoid robot.
Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
The Leverhulme Centre for the Future of Intelligence (CFI) is an interdisciplinary research centre within the University of Cambridge that studies artificial intelligence. It is funded by the Leverhulme Trust.
Shane Legg is a machine learning researcher and entrepreneur. With Demis Hassabis and Mustafa Suleyman, he cofounded DeepMind Technologies, and works there as the chief AGI scientist. He is also known for his academic work on artificial general intelligence, including his thesis supervised by Marcus Hutter.
AI takeover—the idea that some kind of artificial intelligence may supplant humankind as the dominant intelligent species on the planet—is a common theme in science fiction. Famous cultural touchstones include Terminator and The Matrix.
The Future of Work and Death is a 2016 documentary by Sean Blacknell and Wayne Walsh about the exponential growth of technology.