| Richard Evans | |
| --- | --- |
| *Richard Evans, Oxford, 2016* | |
| Alma mater | Imperial College London, Cambridge University |
| Known for | AI research, *The Sims 3* |
| Website | www |
Richard Evans (born 23 October 1969) is an artificial intelligence (AI) research scientist at DeepMind. His research focuses on integrating declarative, interpretable, logic-based systems with neural networks,[1][2][3] and on formal models of Kant's *Critique of Pure Reason*.[4][5][6]
Previously, he designed the AI for a number of computer games. With Emily Short, he co-founded Little Text People, a studio developing real-time multiplayer interactive fiction; it was acquired by Linden Lab in January 2012 for an undisclosed sum.[7] At EA/Maxis, he was the AI lead on *The Sims 3*.[8] He also designed and implemented the AI for *Black & White*, for which he received a number of awards.[9][10][11][12]
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
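As an illustration of the rule-based style described above (not drawn from this article), the following is a minimal sketch of a forward-chaining production-rule system in Python. The facts, the single rule, and the one-letter-uppercase variable convention are all hypothetical examples chosen for brevity.

```python
# Minimal sketch of forward chaining over production rules, one of the
# classic symbolic AI techniques mentioned above. Facts and rules are
# illustrative examples only.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# Each rule: if every premise matches (variables allowed), add the conclusion.
rules = [
    # parent(X, Y) and parent(Y, Z)  =>  grandparent(X, Z)
    ([("parent", "X", "Y"), ("parent", "Y", "Z")], ("grandparent", "X", "Z")),
]

def is_variable(term):
    # Convention for this sketch: a single uppercase letter is a variable.
    return len(term) == 1 and term.isupper()

def substitute(atom, binding):
    # Replace bound variables in an atom with their values.
    return tuple(binding.get(t, t) for t in atom)

def match(premises, binding, facts):
    """Yield variable bindings that satisfy all premises against the fact base."""
    if not premises:
        yield binding
        return
    first, rest = premises[0], premises[1:]
    pattern = substitute(first, binding)
    for fact in facts:
        if len(fact) != len(pattern):
            continue
        new_binding = dict(binding)
        ok = True
        for p, f in zip(pattern, fact):
            if is_variable(p):
                if p in new_binding and new_binding[p] != f:
                    ok = False
                    break
                new_binding[p] = f
            elif p != f:
                ok = False
                break
        if ok:
            yield from match(rest, new_binding, facts)

# Forward chaining: apply rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    new_facts = set()
    for premises, conclusion in rules:
        for binding in match(premises, {}, facts):
            derived = substitute(conclusion, binding)
            if derived not in facts:
                new_facts.add(derived)
    if new_facts:
        facts |= new_facts
        changed = True

print(facts)  # now also contains ('grandparent', 'alice', 'carol')
```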
Geoffrey Everest Hinton is a British-Canadian computer scientist, cognitive scientist, cognitive psychologist, and Nobel Prize winner in Physics, known for his work on artificial neural networks, which earned him the nickname "Godfather of AI".
Emily Short is an interactive fiction (IF) writer. From 2020 to 2023, she was creative director of Failbetter Games, the studio behind Fallen London and its spinoffs.
The outline of artificial intelligence provides an overview of and topical guide to the field.
Richard S. Sutton is a Canadian computer scientist. He is a professor of computing science at the University of Alberta and a research scientist at Keen Technologies. Sutton is considered one of the founders of modern computational reinforcement learning, having made several significant contributions to the field, including temporal difference learning and policy gradient methods.
Andrew Yan-Tak Ng is a British-American computer scientist and technology entrepreneur focusing on machine learning and artificial intelligence (AI). Ng was a cofounder and head of Google Brain and formerly Chief Scientist at Baidu, where he built the company's Artificial Intelligence Group into a team of several thousand people.
Google Brain was a deep learning artificial intelligence research team that served as the sole AI branch of Google before being incorporated under the newer umbrella of Google AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, it combined open-ended machine learning research with information systems and large-scale computing resources. It created tools such as TensorFlow, which allow neural networks to be used by the public, and multiple internal AI research projects, and aimed to create research opportunities in machine learning and natural language processing. It was merged into former Google sister company DeepMind to form Google DeepMind in April 2023.
Yoshua Bengio is a Canadian computer scientist, and a pioneer of artificial neural networks and deep learning. He is a professor at the Université de Montréal and scientific director of the AI institute MILA.
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence (AI), its subdisciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
David Silver is a principal research scientist at Google DeepMind and a professor at University College London. He led research on reinforcement learning with AlphaGo and AlphaZero, and was co-lead on AlphaStar.
Explainable AI (XAI), often overlapping with interpretable AI or explainable machine learning (XML), is a field of research within artificial intelligence (AI) that explores methods giving humans intellectual oversight of AI algorithms. The main focus is on the reasoning behind the decisions or predictions made by the algorithms, to make them more understandable and transparent. This addresses users' need to assess the safety of, and scrutinize, automated decision-making in applications. XAI counters the "black box" tendency of machine learning, where even an AI's designers cannot explain why it arrived at a specific decision.
Dorien Herremans is a Belgian computer music researcher. Herremans is a tenured associate professor at the Singapore University of Technology and Design (SUTD) and previously held a joint appointment at the Institute of High Performance Computing, A*STAR. She also works as a certified instructor for the NVIDIA Deep Learning Institute and is director of SUTD Game Lab. Before joining SUTD, she was a recipient of a Marie Skłodowska-Curie Postdoctoral Fellowship at the Centre for Digital Music (C4DM) at Queen Mary University of London, where she worked on the project MorpheuS: Hybrid Machine Learning – Optimization Techniques To Generate Structured Music Through Morphing And Fusion. She received her Ph.D. in Applied Economics on the topic of Computer Generation and Classification of Music through Operations Research Methods. She graduated as a commercial engineer in management information systems from the University of Antwerp in 2005, after which she worked as a Drupal consultant and as an IT lecturer at Les Roches University in Bluche, Switzerland. She also worked as a 'mandaatassistent' (university teaching and research assistant) at the University of Antwerp, in the domain of operations management, supply chain management and operations research.
A Tsetlin machine is an artificial intelligence algorithm based on propositional logic.
Artificial intelligence and machine learning techniques are used in video games for a wide variety of applications such as non-player character (NPC) control and procedural content generation (PCG). Machine learning is a subset of artificial intelligence that uses historical data to build predictive and analytical models. This is in sharp contrast to traditional methods of artificial intelligence such as search trees and expert systems.
Paulo Shakarian is an associate professor at Arizona State University, where he leads Lab V2, which focuses on neurosymbolic artificial intelligence. His work on artificial intelligence and security has been featured in Forbes, the New Yorker, Slate, the Economist, Business Insider, TechCrunch, CNN, and the BBC. He has authored numerous books on artificial intelligence and the intersection of AI and security. He previously served as a military officer, gained experience at DARPA, and co-founded a startup.
Alexander Mathis is an Austrian mathematician, computational neuroscientist and software developer. He is currently an assistant professor at the École polytechnique fédérale de Lausanne (EPFL) in Switzerland. His research focuses on the intersection of computational neuroscience and machine learning.
Neuro-symbolic AI is a type of artificial intelligence that integrates neural and symbolic AI architectures to address the weaknesses of each, providing a robust AI capable of reasoning, learning, and cognitive modeling. As argued by Leslie Valiant and others, the effective construction of rich computational cognitive models demands the combination of symbolic reasoning and efficient machine learning. Gary Marcus argued, "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning." Further, "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol manipulation in our toolkit. Too much useful knowledge is abstract to proceed without tools that represent and manipulate abstraction, and to date, the only known machinery that can manipulate such abstract knowledge reliably is the apparatus of symbol manipulation."
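To make the division of labour concrete, the following is an illustrative Python sketch (not any published architecture) of a neuro-symbolic pipeline: a stand-in perception module plays the role of the neural network and emits scored symbols, while a small human-readable rule base plays the role of the symbolic reasoner. All names, rules, and values below are hypothetical.

```python
# Toy neuro-symbolic pipeline: learned perception -> symbols -> symbolic rules.
# The "neural" part is a placeholder; a real system would use a trained model.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # symbol emitted by the perception module
    confidence: float # score a neural network would assign

def neural_perception(raw_input):
    """Stand-in for a neural network mapping raw input to scored symbols.
    Hypothetical output, fixed here for illustration only."""
    return [Detection("red_light", 0.93), Detection("pedestrian", 0.18)]

# Symbolic layer: explicit, human-readable rules over the emitted symbols.
RULES = {
    "must_stop": lambda symbols: "red_light" in symbols or "pedestrian" in symbols,
}

def decide(raw_input, threshold=0.5):
    detections = neural_perception(raw_input)
    # Keep only confidently detected symbols, then let the rules decide.
    symbols = {d.label for d in detections if d.confidence >= threshold}
    decisions = {name: rule(symbols) for name, rule in RULES.items()}
    return symbols, decisions

symbols, decisions = decide(raw_input=None)
print(symbols)    # {'red_light'}
print(decisions)  # {'must_stop': True}
```

Because the final decision is taken by named rules over discrete symbols, the sketch also shows why such hybrids are often described as more interpretable: one can point to exactly which rule fired and on which symbols.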
Yixin Chen is a computer scientist, academic, and author. He is a professor of computer science and engineering at Washington University in St. Louis.