Murray Shanahan

Born: Murray Patrick Shanahan
Alma mater: Imperial College London (BSc); University of Cambridge (PhD)
Fields: Artificial intelligence; neurodynamics; consciousness [1]
Institutions: Imperial College London; DeepMind
Thesis: Exploiting dependencies in search and inference mechanisms (1987)
Doctoral advisor: William F. Clocksin [2]
Website: www.doc.ic.ac.uk/~mpsha

Murray Patrick Shanahan is Professor of Cognitive Robotics in the Department of Computing at Imperial College London [3] and a senior scientist at DeepMind. [4] He researches artificial intelligence, robotics, and cognitive science. [1] [5]

Education

Shanahan was educated at Imperial College London [6] and completed his PhD at the University of Cambridge in 1987, [7] supervised by William F. Clocksin. [2]

Career and research

In the Department of Computing at Imperial College, Shanahan was a postdoctoral researcher from 1987 to 1991 and an advanced research fellow until 1995. He was then a senior research fellow at Queen Mary & Westfield College from 1995 to 1998, before returning to Imperial, first in the Department of Electrical Engineering and then (from 2005) in the Department of Computing, where he was promoted from Reader to Professor in 2006. [6]

Shanahan was a scientific advisor for Alex Garland's 2014 film Ex Machina; [8] Garland credited him with correcting an error concerning the Turing test in early drafts of the script. [9] Shanahan is on the external advisory board of the Cambridge Centre for the Study of Existential Risk. [10] [11]

In 2016 Shanahan and his colleagues published a proof of concept for "deep symbolic reinforcement learning", a hybrid architecture that combines symbolic AI with neural networks and exhibits a form of transfer learning. [12] [13] In 2017, citing "the potential [brain drain] on academia of the current tech hiring frenzy" as an issue of concern, Shanahan negotiated a joint position at Imperial College London and DeepMind. [4] The Atlantic and Wired UK have characterized Shanahan as an influential researcher. [14] [15]
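The sources cited here describe the architecture only at a high level, but the general shape of such a hybrid can be illustrated with a short sketch. The following Python fragment is a minimal illustration of the neuro-symbolic idea, not the published Deep Symbolic Reinforcement Learning system: a stubbed perception function stands in for a trained neural network that detects objects, and a plain tabular Q-learning agent plans over the resulting symbols. Every name, the symbol format, and the toy environment are hypothetical.

```python
import random
from collections import defaultdict

def extract_symbols(detections):
    """Stand-in for a learned neural perception module: map raw input
    (here, a list of (object_type, x, y) detections) to a compact,
    hashable symbolic state using coarse grid positions."""
    return tuple(sorted((obj, x // 4, y // 4) for obj, x, y in detections))

class SymbolicQAgent:
    """Tabular Q-learning over the symbolic states produced by the front end."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q-values keyed by (state, action)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:      # explore occasionally
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # Standard Q-learning update toward the best next-state value.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Hypothetical usage: one interaction step in an imaginary grid game.
agent = SymbolicQAgent(actions=["left", "right", "up", "down"])
state = extract_symbols([("ball", 3, 7), ("cross", 10, 2)])
action = agent.act(state)
next_state = extract_symbols([("ball", 4, 7), ("cross", 10, 2)])
agent.learn(state, action, reward=1.0, next_state=next_state)
```

Because the value table is keyed by abstract object symbols rather than raw pixels, a policy learned in one environment can carry over to another that contains the same object types, which is one sense in which such hybrids exhibit a form of transfer learning.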

Books

In 2010, Shanahan published Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds, a book that helped inspire the 2014 film Ex Machina. [16] The book argues that cognition revolves around a process of "inner rehearsal", in which an embodied entity predicts the consequences of its physical actions. [17]

In 2015, Shanahan published The Technological Singularity, which runs through various scenarios following the invention of an artificial intelligence that makes better versions of itself and rapidly outcompetes humans. [18] The book aims to be an evenhanded primer on the issues surrounding superhuman intelligence. [19] Shanahan takes the view that we do not know how superintelligences will behave: whether they will be friendly or hostile, predictable or inscrutable. [20]

Shanahan also authored Solving the Frame Problem (MIT Press, 1997) and co-authored Search, Inference and Dependencies in Artificial Intelligence (Ellis Horwood, 1989). [6]

Views

As of the 2020s, Shanahan characterizes AI as lacking the common sense of a human child. [21] He endorses research into artificial general intelligence (AGI) to address this gap, stating that AI systems deployed in areas such as medical diagnosis and automated vehicles need such abilities to be safer and more effective. Shanahan sees no need to panic about an AI takeover, because multiple conceptual breakthroughs will be needed for AGI and "it is impossible to know when [AGI] might be achievable". [22] He later stated that an "unknown number of conceptual breakthroughs are needed" for the development of AGI. [23] On the risk itself, Shanahan has said: "The AI community does not think it's a substantial worry, whereas the public does think it's much more of an issue. The right place to be is probably in-between those two extremes." In 2014 he argued that "it's probably a good idea for AI researchers to start thinking [now] about the [existential risk] issues that Stephen Hawking and others have raised." [24]

References

  1. Murray Shanahan publications indexed by Google Scholar.
  2. Murray Shanahan at the Mathematics Genealogy Project.
  3. "How to make a digital human brain". Fox News. 13 June 2013. Retrieved 8 March 2016.
  4. Sample, Ian (1 November 2017). "'We can't compete': why universities are losing their best AI scientists". The Guardian. Retrieved 7 June 2020.
  5. Murray Shanahan at the DBLP Bibliography Server.
  6. "Murray Shanahan". www.doc.ic.ac.uk.
  7. Shanahan, Murray Patrick (1987). Exploiting dependencies in search and inference mechanisms (PhD thesis). University of Cambridge. OCLC 53611159. EThOS uk.bl.ethos.252643.
  8. "AI: will the machines ever rise up?". The Guardian. 26 June 2015. Retrieved 7 June 2020.
  9. "Inside "Devs," a Dreamy Silicon Valley Quantum Thriller". Wired. March 2020. Retrieved 7 June 2020.
  10. Shead, Sam (25 May 2020). "How Britain's oldest universities are trying to protect humanity from risky A.I." CNBC. Retrieved 7 June 2020.
  11. "Team". Archived from the original on 7 November 2017.
  12. Vincent, James (10 October 2016). "These are three of the biggest problems facing today's AI". The Verge. Retrieved 7 June 2020.
  13. Adee, Sally (2016). "Basic common sense is key to building more intelligent machines". New Scientist. Retrieved 7 June 2020.
  14. Ball, Philip (25 July 2017). "Why Philosophers Are Obsessed With Brains in Jars". The Atlantic. Retrieved 7 June 2020. Embodiment is central to thought itself, according to the AI guru Murray Shanahan
  15. Manthorpe, Rowland (12 October 2016). "The UK has a new AI centre – so when robots kill, we know who to blame". Wired UK. Retrieved 7 June 2020. The list of researchers on the Centre's nine projects features a roll call of AI luminaries: Nick Bostrom, director of Oxford's Future of Humanity Institute, is leading one, as are Imperial College's Murray Shanahan and Berkeley's Stuart Russell.
  16. O'Sullivan, Michael (1 May 2015). "Why are we obsessed with robots?". Washington Post. Retrieved 7 June 2020.
  17. Ball, Philip (25 July 2017). "Why Philosophers Are Obsessed With Brains in Jars". The Atlantic. Retrieved 7 June 2020.
  18. "Autumn's science books weigh up humanity's future options". New Scientist. 9 September 2015. Retrieved 8 March 2016.
  19. Library Journal (2015). Review of The Technological Singularity by Murray Shanahan: "This evenhanded primer on a topic whose significance is becoming increasingly recognized ought, as per its inclusion in this series, to receive wide exposure."
  20. Perkowitz, Sidney (18 February 2016). On The Technological Singularity and Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. LA Review of Books.
  21. Shanahan, M., Crosby, M., Beyret, B., & Cheke, L. (2020). Artificial intelligence and the common sense of animals. Trends in Cognitive Sciences, 24(11), 862–872. https://doi.org/10.1016/j.tics.2020.09.002
  22. King, Anthony (2018). "Machines won't be taking over yet, says leading robotics expert". The Irish Times. Retrieved 7 June 2020.
  23. "Murray Shanahan: The Future of Artifical [sic] Intelligence - Schrödinger at 75: The Future of Biology". Trinity College Dublin. YouTube. Retrieved 23 September 2022.
  24. Ward, Mark (2 December 2014). "Does rampant AI threaten humanity?". BBC News. Retrieved 7 June 2020.