Richard Evans (AI researcher)

Richard Evans
Richard Evans, Oxford, 2016.
Alma mater: Imperial College London; University of Cambridge
Known for: AI research; Black & White; The Sims 3
Website: www.imperial.ac.uk/people/r.evans14

Richard Evans (born 23 October 1969) is an artificial intelligence (AI) research scientist at DeepMind. His research focuses on integrating interpretable, declarative, logic-based systems with neural networks,[1][2][3] and on formal models of Kant's Critique of Pure Reason.[4][5][6]

Previously, he designed the AI for a number of computer games. With Emily Short, he co-founded Little Text People, a studio developing real-time multiplayer interactive fiction; it was acquired by Linden Lab in January 2012 for an undisclosed sum.[7] At EA/Maxis, he was the AI lead on The Sims 3.[8] He also designed and implemented the AI for Black & White, for which he received a number of awards.[9][10][11][12]


References

  1. Learning Explanatory Rules from Noisy Data
  2. The deepest problem with deep learning
  3. Can Neural Networks Understand Logical Entailment?
  4. Evans, R.; Sergot, M.; Stephenson, A. (2020). "Formalizing Kant's Rules". Journal of Philosophical Logic. 49 (4): 613–680. doi:10.1007/s10992-019-09531-x.
  5. High-level Perception and Program Synthesis, FLoC, Oxford, 2018
  6. Evans, Richard; Hernandez-Orallo, Jose; Welbl, Johannes; Kohli, Pushmeet; Sergot, Marek (2019). "Making sense of sensory input". arXiv: 1910.02227 [cs.AI].
  7. Little Text People acquired by Linden Lab
  8. The Sims 3 won Editor's Pick for Best AI in an AAA Game, 2009
  9. Archive/2nd Annual Game Developers Choice Awards, from the Game Developers Choice Awards website
  10. AAAI.org
  11. Blurb on Evans Archived 2007-06-11 at the Wayback Machine from Invited Speakers list from AIIDE website
  12. AIGameDev article