Murray Shanahan

Born: Murray Patrick Shanahan
Alma mater: Imperial College London (BSc); University of Cambridge (PhD)
Fields: Artificial intelligence; neurodynamics; consciousness [1]
Institutions: Imperial College London; DeepMind
Thesis: Exploiting dependencies in search and inference mechanisms
Doctoral advisor: William F. Clocksin [2]
Website: www.doc.ic.ac.uk/~mpsha

Murray Patrick Shanahan is Professor of Cognitive Robotics in the Department of Computing at Imperial College London [3] and a senior scientist at DeepMind. [4] He researches artificial intelligence, robotics, and cognitive science. [1] [5]

Education

Shanahan was educated at Imperial College London [6] and completed his PhD at the University of Cambridge in 1987, [7] supervised by William F. Clocksin. [2]

Career and research

In the Department of Computing at Imperial College, Shanahan was a postdoctoral researcher from 1987 to 1991 and then an advanced research fellow until 1995. From 1995 to 1998 he was a senior research fellow at Queen Mary & Westfield College. He then rejoined Imperial in the Department of Electrical Engineering, moving in 2005 to the Department of Computing, where he was promoted from Reader to Professor in 2006. [6]

Shanahan was a scientific advisor for Alex Garland's 2014 film Ex Machina. [8] Garland credited Shanahan with correcting an error regarding the Turing test in an early draft of the script. [9] Shanahan is on the external advisory board of the Cambridge Centre for the Study of Existential Risk. [10] [11]

In 2016 Shanahan and his colleagues published a proof of concept for "Deep Symbolic Reinforcement Learning", a hybrid AI architecture that combines symbolic AI with neural networks and exhibits a form of transfer learning. [12] [13] In 2017, citing "the potential [brain drain] on academia of the current tech hiring frenzy" as a concern, Shanahan negotiated a joint position at Imperial College London and DeepMind. [4] The Atlantic and Wired UK have characterized Shanahan as an influential researcher. [14] [15]

Books

In 2010, Shanahan published Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds, a book that helped inspire the 2014 film Ex Machina. [16] The book argues that cognition revolves around a process of "inner rehearsal", in which an embodied entity predicts the consequences of its physical actions. [17]

In 2015, Shanahan published The Technological Singularity, which runs through various scenarios following the invention of an artificial intelligence that makes better versions of itself and rapidly outcompetes humans. [18] The book aims to be an evenhanded primer on the issues surrounding superhuman intelligence. [19] Shanahan takes the view that we do not know how superintelligences will behave: whether they will be friendly or hostile, predictable or inscrutable. [20]

Shanahan also authored Solving the Frame Problem (MIT Press, 1997) and co-authored Search, Inference and Dependencies in Artificial Intelligence (Ellis Horwood, 1989). [6]

Views

Shanahan said in 2014 of existential risks from AI: "The AI community does not think it's a substantial worry, whereas the public does think it's much more of an issue. The right place to be is probably in-between those two extremes." He added that "it's probably a good idea for AI researchers to start thinking [now] about the [existential risk] issues that Stephen Hawking and others have raised." [21] In 2018, Shanahan said there was no need to panic yet about an AI takeover, because multiple conceptual breakthroughs would be needed for artificial general intelligence (AGI), and "it is impossible to know when [AGI] might be achievable". He stated that AGI would come hand-in-hand with true understanding, enabling, for example, safer automated vehicles and medical diagnosis applications. [22] [23] In 2020, Shanahan characterized AI as lacking the common sense of a human child. [24]

References

  1. Murray Shanahan publications indexed by Google Scholar
  2. Murray Shanahan at the Mathematics Genealogy Project
  3. "How to make a digital human brain". Fox News. 13 June 2013. Retrieved 8 March 2016.
  4. Sample, Ian (1 November 2017). "'We can't compete': why universities are losing their best AI scientists". The Guardian. Retrieved 7 June 2020.
  5. Murray Shanahan at DBLP Bibliography Server
  6. "Murray Shanahan". www.doc.ic.ac.uk.
  7. Shanahan, Murray Patrick (1987). Exploiting dependencies in search and inference mechanisms. cam.ac.uk (PhD thesis). University of Cambridge. OCLC   53611159. EThOS   uk.bl.ethos.252643.
  8. "AI: will the machines ever rise up?". The Guardian. 26 June 2015. Retrieved 7 June 2020.
  9. "Inside "Devs," a Dreamy Silicon Valley Quantum Thriller". Wired. March 2020. Retrieved 7 June 2020.
  10. Shead, Sam (25 May 2020). "How Britain's oldest universities are trying to protect humanity from risky A.I." CNBC. Retrieved 7 June 2020.
  11. "Team". Archived from the original on 7 November 2017.
  12. Vincent, James (10 October 2016). "These are three of the biggest problems facing today's AI". The Verge. Retrieved 7 June 2020.
  13. Adee, Sally (2016). "Basic common sense is key to building more intelligent machines". New Scientist. Retrieved 7 June 2020.
  14. Ball, Philip (25 July 2017). "Why Philosophers Are Obsessed With Brains in Jars". The Atlantic. Retrieved 7 June 2020. Embodiment is central to thought itself, according to the AI guru Murray Shanahan
  15. Manthorpe, Rowland (12 October 2016). "The UK has a new AI centre – so when robots kill, we know who to blame". Wired UK. Retrieved 7 June 2020. The list of researchers on the Centre's nine projects features a roll call of AI luminaries: Nick Bostrom, director of Oxford's Future of Humanity Institute, is leading one, as are Imperial College's Murray Shanahan and Berkeley's Stuart Russell.
  16. O'Sullivan, Michael (1 May 2015). "Why are we obsessed with robots?". Washington Post. Retrieved 7 June 2020.
  17. Ball, Philip (25 July 2017). "Why Philosophers Are Obsessed With Brains in Jars". The Atlantic. Retrieved 7 June 2020.
  18. "Autumn's science books weigh up humanity's future options". New Scientist. 9 September 2015. Retrieved 8 March 2016.
  19. 2015 Library Journal review of The Technological Singularity by Murray Shanahan. "This evenhanded primer on a topic whose significance is becoming increasingly recognized ought, as per its inclusion in this series, to receive wide exposure."
  20. Sidney Perkowitz on The Technological Singularity and Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, LA Review of Books, February 18, 2016
  21. Ward, Mark (2 December 2014). "Does rampant AI threaten humanity?". BBC News. Retrieved 7 June 2020.
  22. King, Anthony (2018). "Machines won't be taking over yet, says leading robotics expert". The Irish Times. Retrieved 7 June 2020.
  23. "Murray Shanahan: The Future of Artifical [sic] Intelligence - Schrödinger at 75: The Future of Biology". Trinity College Dublin. YouTube. Retrieved 23 September 2022.
  24. Shanahan, Murray; Crosby, Matthew; Beyret, Benjamin; Cheke, Lucy (1 November 2020). "Artificial Intelligence and the Common Sense of Animals". Trends in Cognitive Sciences. 24 (11): 862–872. doi:10.1016/j.tics.2020.09.002. ISSN   1364-6613. PMID   33041199.