Neuro-symbolic AI is a type of artificial intelligence that integrates neural and symbolic AI architectures to address the weaknesses of each, providing a robust AI capable of reasoning, learning, and cognitive modeling. As argued by Leslie Valiant [1] and others, [2] [3] the effective construction of rich computational cognitive models demands the combination of symbolic reasoning and efficient machine learning. Gary Marcus argued, "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning." [4] Further, "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol manipulation in our toolkit. Too much useful knowledge is abstract to proceed without tools that represent and manipulate abstraction, and to date, the only known machinery that can manipulate such abstract knowledge reliably is the apparatus of symbol manipulation." [5]
Henry Kautz, [6] Francesca Rossi, [7] and Bart Selman [8] also argued for a synthesis. Their arguments appeal to the two kinds of thinking discussed in Daniel Kahneman's book Thinking, Fast and Slow, which describes cognition as encompassing two components: System 1 is fast, reflexive, intuitive, and unconscious, and is used for pattern recognition; System 2 is slower, step-by-step, and explicit, and handles planning, deduction, and deliberative thinking. In this view, deep learning best handles the first kind of cognition while symbolic reasoning best handles the second. Both are needed for a robust, reliable AI that can learn, reason, and interact with humans to accept advice and answer questions. Dual-process models with explicit references to these two contrasting systems have been worked on by multiple researchers since the 1990s, in both AI and cognitive science. [9]
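The dual-process framing above can be made concrete with a small sketch: a fast, learned scorer stands in for System 1, and an explicit, rule-based check stands in for System 2. This is a minimal illustration under an assumed toy task (flagging a suspicious message); the names, weights, and rule are illustrative and not drawn from the cited literature.

```python
# A minimal sketch of the dual-process view described above, assuming a toy
# task (flagging a suspicious message). The names, weights, and rule are
# illustrative, not drawn from the cited literature.

def system1_intuition(features, weights):
    """Fast, pattern-based scoring: a stand-in for a trained neural model."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def system2_deliberation(facts, rules):
    """Slow, explicit reasoning: apply if-then rules to the stated facts."""
    return {conclusion for premises, conclusion in rules if premises <= facts}

features = {"urgent_wording": 1.0, "unknown_sender": 1.0}
weights = {"urgent_wording": 0.7, "unknown_sender": 0.5}
facts = {"urgent_wording", "unknown_sender"}
rules = [({"urgent_wording", "unknown_sender"}, "flag_for_review")]

print("System 1 score:", system1_intuition(features, weights))   # 1.2 (fast, learned guess)
print("System 2 output:", system2_deliberation(facts, rules))    # {'flag_for_review'}
```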
Approaches for integration are diverse. Henry Kautz's taxonomy of neuro-symbolic architectures [10] follows, along with some examples:
- Symbolic Neural symbolic is the approach of many current neural models in natural language processing, where words or subword tokens are both the input and the output of large language models, e.g., BERT and GPT-3.
- Symbolic[Neural] is exemplified by AlphaGo, where symbolic techniques (Monte Carlo tree search) invoke neural techniques that learn to evaluate game positions.
- Neural | Symbolic uses a neural architecture to interpret perceptual data as symbols and relationships that are then reasoned about symbolically.
- Neural: Symbolic → Neural relies on symbolic reasoning to generate or label training data that is subsequently learned by a deep learning model.
- Neural_{Symbolic} uses a neural net that is generated from symbolic rules, as in the Neural Theorem Prover and Logic Tensor Networks.
- Neural[Symbolic] allows a neural model to call a symbolic reasoning engine directly, e.g., to perform an action or evaluate a state.
These categories are not exhaustive, as they do not consider multi-agent systems. In 2005, Bader and Hitzler presented a more fine-grained categorization that considered, for example, whether the use of symbols included logic and, if so, whether that logic was propositional or first-order. [14] The 2005 categorization and Kautz's taxonomy above are compared and contrasted in a 2021 article. [10] Recently, Sepp Hochreiter argued that Graph Neural Networks "...are the predominant models of neural-symbolic computing" [15] since "[t]hey describe the properties of molecules, simulate social networks, or predict future states in physical and engineering applications with particle-particle interactions." [16]
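As a concrete, if toy, illustration of one integration pattern from the taxonomy above (a pipeline in which a perception stage produces symbols that a symbolic stage then reasons over), the sketch below uses a hand-written stand-in for a neural perception model and a small forward-chaining rule engine. The function names, rules, and driving-scene framing are assumptions made purely for illustration.

```python
# A toy "perception -> symbols -> symbolic reasoning" pipeline. The perceive()
# function is a hand-written stand-in for a trained neural network; the rules
# and domain are illustrative assumptions.

def perceive(pixels):
    """Stand-in for a neural perception model: raw input -> symbolic facts."""
    facts = set()
    if sum(pixels) > 10:
        facts.add("bright_object")
    if len(pixels) > 4:
        facts.add("large_object")
    return facts

RULES = [
    ({"bright_object", "large_object"}, "possible_vehicle"),
    ({"possible_vehicle"}, "slow_down"),
]

def forward_chain(facts, rules):
    """Symbolic stage: derive new symbols until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

symbols = perceive([3, 4, 5, 2, 1])     # neural-style stage (stand-in)
print(forward_chain(symbols, RULES))    # adds 'possible_vehicle' and 'slow_down'
```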
Gary Marcus argues that "...hybrid architectures that combine learning and symbol manipulation are necessary for robust intelligence, but not sufficient", [17] and that there are
...four cognitive prerequisites for building robust artificial intelligence:
- hybrid architectures that combine large-scale learning with the representational and computational powers of symbol manipulation,
- large-scale knowledge bases—likely leveraging innate frameworks—that incorporate symbolic knowledge along with other forms of knowledge,
- reasoning mechanisms capable of leveraging those knowledge bases in tractable ways, and
- rich cognitive models that work together with those mechanisms and knowledge bases. [18]
This echoes earlier calls for hybrid models dating back to the 1990s. [19] [20]
Garcez and Lamb described research in this area as having been ongoing since at least the 1990s. [21] [22] At that time, the terms symbolic and sub-symbolic AI were popular.
A series of workshops on neuro-symbolic AI, titled Neuro-Symbolic Artificial Intelligence, has been held annually since 2005. [23] An initial set of workshops on this topic was organized in the early 1990s. [19]
Key research questions remain, [24] such as:
- What is the best way to integrate neural and symbolic architectures?
- How should symbolic structures be represented within neural networks and extracted from them?
- How should common-sense knowledge be learned and reasoned about?
- How can abstract knowledge that is hard to encode logically be handled?
Implementations of neuro-symbolic approaches include systems such as Logic Tensor Networks, DeepProbLog, and Scallop.
Cognitive science is the interdisciplinary, scientific study of the mind and its processes. It examines the nature, the tasks, and the functions of cognition. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision-making to logic and planning, and from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."
Connectionism is the name of an approach to the study of human mental processes and cognition that utilizes mathematical models known as connectionist networks or artificial neural networks. Connectionism has had many 'waves' since its beginnings.
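A minimal connectionist sketch follows: a single perceptron-style unit with weighted connections learns the logical AND function from examples. The learning rate and epoch count are arbitrary illustrative choices, not values from any particular connectionist model.

```python
# A single perceptron (one unit with weighted connections) trained on the
# logical AND function. Learning rate and number of passes are illustrative.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # connection weights
bias = 0.0
lr = 0.1         # learning rate

for _ in range(20):                       # a few passes over the data
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - out
        w[0] += lr * error * x1           # adjust connection weights
        w[1] += lr * error * x2
        bias += lr * error

print([1 if w[0] * a + w[1] * b + bias > 0 else 0
       for (a, b), _ in examples])        # [0, 0, 0, 1] once the unit has converged
```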
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
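To make the flavor of two of these symbolic tools concrete, the sketch below encodes a tiny semantic net as "is-a" links and resolves frame-style slots by inheritance along those links. The animal domain, slot names, and values are purely illustrative assumptions.

```python
# A toy semantic net ("is-a" links) with frame-style slot inheritance.
# Domain and values are illustrative.

IS_A = {"canary": "bird", "bird": "animal"}     # semantic-net edges
FRAMES = {                                      # frame slots per concept
    "animal": {"alive": True},
    "bird": {"can_fly": True},
    "canary": {"color": "yellow"},
}

def lookup(concept, slot):
    """Resolve a slot by walking up the is-a hierarchy (inheritance)."""
    while concept is not None:
        if slot in FRAMES.get(concept, {}):
            return FRAMES[concept][slot]
        concept = IS_A.get(concept)
    return None

print(lookup("canary", "can_fly"))  # True, inherited from "bird"
print(lookup("canary", "alive"))    # True, inherited from "animal"
```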
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
In artificial intelligence (AI), commonsense reasoning is a human-like ability to make presumptions about the type and essence of ordinary situations humans encounter every day. These assumptions include judgments about the nature of physical objects, taxonomic properties, and people's intentions. A device that exhibits commonsense reasoning might be capable of drawing conclusions that are similar to humans' folk psychology and naive physics.
The expression computational intelligence (CI) usually refers to the ability of a computer to learn a specific task from data or experimental observation. Even though it is commonly considered a synonym of soft computing, there is still no commonly accepted definition of computational intelligence.
A cognitive architecture refers to both a theory about the structure of the human mind and to a computational instantiation of such a theory used in the fields of artificial intelligence (AI) and computational cognitive science. These formalized models can be used to further refine comprehensive theories of cognition and serve as the frameworks for useful artificial intelligence programs. Successful cognitive architectures include ACT-R and SOAR. The research on cognitive architectures as software instantiations of cognitive theories was initiated by Allen Newell in 1990.
The Sally–Anne test is a psychological test originally conceived by Daniel Dennett, used in developmental psychology to measure a person's social cognitive ability to attribute false beliefs to others. Based on the earlier ground-breaking study by Wimmer and Perner (1983), the Sally–Anne test was so named by Simon Baron-Cohen, Alan M. Leslie, and Uta Frith (1985) who developed the test further; in 1988, Leslie and Frith repeated the experiment with human actors and found similar results.
Hybrid intelligent system denotes a software system which employs, in parallel, a combination of methods and techniques from artificial intelligence subfields, such as neuro-symbolic systems, neuro-fuzzy systems, and evolutionary neural networks.
Dov M. Gabbay is an Israeli logician. He is Augustus De Morgan Professor Emeritus of Logic at the Group of Logic, Language and Computation, Department of Computer Science, King's College London.
The following outline is provided as an overview of and topical guide to artificial intelligence.
Ron Sun is a cognitive scientist who has made significant contributions to computational psychology and other areas of cognitive science and artificial intelligence. He is currently professor of cognitive sciences at Rensselaer Polytechnic Institute, and formerly the James C. Dowell Professor of Engineering and Professor of Computer Science at the University of Missouri. He received his Ph.D. in 1992 from Brandeis University.
Artur d'Avila Garcez is a researcher in the field of computational logic and neural computation, in particular hybrid systems with application in software verification and information extraction. His contributions include neural-symbolic learning systems and nonclassical models of computation combining robust learning and reasoning. He is a Professor of Computer Science at City, University of London.
In the philosophy of artificial intelligence, GOFAI is classical symbolic AI, as opposed to other approaches, such as neural networks, situated robotics, narrow symbolic AI or neuro-symbolic AI. The term was coined by philosopher John Haugeland in his 1985 book Artificial Intelligence: The Very Idea.
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
In artificial intelligence, a differentiable neural computer (DNC) is a memory-augmented neural network (MANN) architecture, which is typically recurrent in its implementation. The model was published in 2016 by Alex Graves et al. of DeepMind.
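The sketch below illustrates only the content-based (soft) addressing idea at the heart of such memory-augmented designs, not the published DNC architecture: a read key is compared to every memory slot and the result is a similarity-weighted blend of slots. The memory contents, dimensions, and sharpness parameter beta are arbitrary assumptions.

```python
# Toy content-based read from an external memory matrix, in the spirit of
# memory-augmented networks. Not the published DNC; all values are illustrative.

import math

MEMORY = [                     # 3 memory slots, each a 4-dimensional vector
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1e-8
    nb = math.sqrt(sum(x * x for x in b)) or 1e-8
    return dot / (na * nb)

def content_read(key, beta=5.0):
    """Softmax over key/slot similarities, then a weighted sum of slots."""
    scores = [math.exp(beta * cosine(key, slot)) for slot in MEMORY]
    total = sum(scores)
    weights = [s / total for s in scores]
    return [sum(w * slot[i] for w, slot in zip(weights, MEMORY))
            for i in range(len(MEMORY[0]))]

print(content_read([0.9, 0.1, 0.0, 0.0]))  # blends mostly toward the first slot
```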
The following outline is provided as an overview of and topical guide to machine learning.
Pascal Hitzler is a German American computer scientist specializing in Semantic Web and Artificial Intelligence. He holds the endowed Lloyd T. Smith Creativity in Engineering Chair, is one of the Directors of the Institute for Digital Agriculture and Advanced Analytics (ID3A) and Director of the Center for Artificial Intelligence and Data Science (CAIDS) at Kansas State University, and is the founding Editor-in-Chief of the Semantic Web journal and the IOS Press book series Studies on the Semantic Web.
Jens Lehmann is a computer scientist who works with knowledge graphs and artificial intelligence. He is a principal scientist at Amazon, an honorary professor at TU Dresden, and a fellow of the European Laboratory for Learning and Intelligent Systems. Formerly, he was a full professor at the University of Bonn, Germany, and lead scientist for Conversational AI and Knowledge Graphs at Fraunhofer IAIS.