Computational cognition (sometimes referred to as computational cognitive science, computational psychology, or cognitive simulation) is the study of the computational basis of learning and inference through mathematical modeling, computer simulation, and behavioral experiments. In psychology, it is an approach that develops computational models based on experimental results. It seeks to understand the basis of the human method of processing information. Early on, computational cognitive scientists sought to revive and create a scientific form of Brentano's psychology.[1]
There are two main purposes for the production of artificial intelligence: to produce intelligent behaviors regardless of the quality of the results, and to model intelligent behaviors as found in nature.[2][3] At the beginning of its existence, there was no need for artificial intelligence to emulate the same behavior as human cognition. In the 1950s and 1960s, Herbert Simon and Allen Newell attempted to formalize human problem-solving skills by using the results of psychological studies to develop programs that implemented the same problem-solving techniques that people use. Their work laid the foundation for symbolic AI and computational cognition, and even produced some advances for cognitive science and cognitive psychology.[4]
The field of symbolic AI is based on the physical symbol system hypothesis of Simon and Newell, which states that many aspects of cognitive intelligence can be achieved through the manipulation of symbols.[5] John McCarthy, however, focused more on the initial purpose of artificial intelligence: to capture the essence of logical and abstract reasoning regardless of whether or not humans employ the same mechanisms.[3]
Over the following decades, progress in artificial intelligence came to focus more on developing logic-based and knowledge-based programs, veering away from the original purpose of symbolic AI. Researchers began to believe that symbolic artificial intelligence might never be able to imitate some intricate processes of human cognition, such as perception or learning. The then-perceived impossibility (since refuted[6]) of implementing emotion in AI was seen as a stumbling block on the path to achieving human-like cognition with computers.[7] Researchers began to take a "sub-symbolic" approach to creating intelligence without explicitly representing knowledge. This movement led to the emerging disciplines of computational modeling, connectionism, and computational intelligence.[5]
Computational cognitive modeling, which contributes more to the understanding of human cognition than to artificial intelligence, emerged from the need to define various cognitive functions (such as motivation, emotion, or perception) by representing them in computational models of mechanisms and processes.[8] Computational models study complex systems through the use of algorithms with many variables and extensive computational resources to produce computer simulations.[9] Simulation is achieved by adjusting the variables, changing one alone or combining several together, and observing the effect on the outcomes. The results help experimenters make predictions about what would happen in the real system if similar changes were to occur.[10]
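As a minimal illustration of this style of simulation, the Python sketch below sweeps one variable of a toy forgetting-curve model while holding everything else fixed. The exponential model form, the decay rates, and the delay values are assumptions invented for the example, not a published cognitive model.

```python
# A minimal sketch of simulation by variable adjustment: a toy exponential
# forgetting curve is run under several decay rates, one variable at a time,
# to observe the effect on the simulated outcome.

import math

def recall_probability(delay, decay_rate):
    """Chance of recalling an item after `delay` time units, assuming
    simple exponential forgetting (an illustrative model form)."""
    return math.exp(-decay_rate * delay)

# Vary one parameter while holding everything else fixed, then compare
# outcomes, as an experimenter probing a computational model would.
for decay_rate in (0.1, 0.3, 0.5):
    curve = [recall_probability(t, decay_rate) for t in (0, 2, 4, 6, 8)]
    print(f"decay={decay_rate}:", ", ".join(f"{p:.2f}" for p in curve))
```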
When computational models attempt to mimic human cognitive functioning, all the details of the function must be known for them to transfer and display properly through the models; this allows researchers to thoroughly understand and test an existing theory, because no variables are vague and all variables are modifiable. Consider the model of memory proposed by Atkinson and Shiffrin in 1968: it showed how rehearsal leads to long-term memory, where the rehearsed information is stored. Despite the advances it made in revealing the function of memory, this model fails to answer crucial questions such as: how much information can be rehearsed at a time? How long does it take for information to transfer from rehearsal to long-term memory? Similarly, other computational models raise more questions about cognition than they answer, making their contributions much less significant for the understanding of human cognition than other cognitive approaches.[11] An additional shortcoming of computational modeling is its reported lack of objectivity.[12]
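The sketch below gives a minimal two-store reading of the Atkinson-Shiffrin idea, in which rehearsed items may transfer to long-term memory. Notably, the rehearsal capacity and transfer probability it must assume are exactly the quantities the original model leaves unspecified; the values used here are illustrative guesses, not the authors' figures.

```python
# A minimal two-store (short-term / long-term) sketch in the spirit of
# Atkinson and Shiffrin (1968). The two constants below are assumptions:
# the model itself does not specify either quantity.

import random

STM_CAPACITY = 4      # assumed rehearsal capacity (unspecified in the model)
TRANSFER_PROB = 0.2   # assumed per-cycle transfer chance (also unspecified)

def rehearse(items, cycles, rng=random.Random(0)):
    # Only items that fit in the short-term store get rehearsed at all;
    # with a capacity of 4, a fifth item is simply never rehearsed.
    short_term = list(items)[:STM_CAPACITY]
    long_term = set()
    for _ in range(cycles):
        for item in short_term:
            if rng.random() < TRANSFER_PROB:  # rehearsal consolidates the item
                long_term.add(item)
    return long_term

print(rehearse(["cat", "pen", "sky", "map", "oak"], cycles=10))
```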
John Anderson's Adaptive Control of Thought-Rational (ACT-R) model uses the functions of computational models together with the findings of cognitive science. The ACT-R model is based on the theory that the brain consists of several modules that perform specialized functions separately from one another.[11] The ACT-R model is classified as a symbolic approach to cognitive science.[13]
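To illustrate the symbolic, modular style of such architectures, here is a toy production system in which specialized rules operate on a shared working memory. This is only a schematic sketch of the general approach, not the actual ACT-R implementation (which is a full architecture with perceptual, motor, and declarative-memory modules); the rule names and memory slots are invented for the example.

```python
# Working memory: the current goal and its slots.
memory = {"goal": "add", "a": 2, "b": 3}

def add_rule(mem):
    """Fires only when the goal is 'add', like a specialized module."""
    if mem.get("goal") == "add":
        mem["result"] = mem["a"] + mem["b"]
        mem["goal"] = "report"
        return True
    return False

def report_rule(mem):
    """Fires when a result is ready to be reported."""
    if mem.get("goal") == "report":
        print("result:", mem["result"])
        mem["goal"] = "done"
        return True
    return False

productions = [add_rule, report_rule]

# Recognize-act cycle: keep firing the first production whose condition
# matches the current contents of working memory.
while any(rule(memory) for rule in productions):
    pass
```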
Another approach, which deals more with the semantic content of cognitive science, is connectionism or neural network modeling. Connectionism relies on the idea that the brain consists of simple units, or nodes, and that the behavioral response comes primarily from the layers of connections between the nodes rather than from the environmental stimulus itself.[11]
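The sketch below illustrates that idea in miniature: the same stimulus yields different responses under different connection weights, so the response is carried by the layered connections rather than by the stimulus alone. The network size and all weight values are arbitrary illustrative choices.

```python
# A minimal connectionist sketch: the response is computed purely from
# weighted connections between simple units.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def respond(stimulus, hidden_weights, output_weights):
    # Each hidden node sums its weighted inputs; the response is a
    # weighted combination of the hidden-layer activity.
    hidden = [sigmoid(sum(w * s for w, s in zip(ws, stimulus)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

stimulus = [1.0, 0.0]
net_a = ([[0.5, -0.4], [0.3, 0.8]], [1.2, -0.7])
net_b = ([[-0.9, 0.2], [0.6, -0.1]], [-0.5, 1.1])
print(respond(stimulus, *net_a))  # same stimulus ...
print(respond(stimulus, *net_b))  # ... different response under other weights
```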
Connectionist networks differ from computational modeling specifically because of two functions: neural back-propagation and parallel processing. Neural back-propagation is a method used by connectionist networks to show evidence of learning. After a connectionist network produces a response, the simulated results are compared to real-life situational results. The feedback provided by the backward propagation of errors is used to improve the accuracy of the network's subsequent responses.[14] The second function, parallel processing, stemmed from the belief that knowledge and perception are not limited to specific modules but rather are distributed throughout the cognitive networks. The presence of parallel distributed processing has been shown in psychological demonstrations such as the Stroop effect, where the brain seems to analyze the perception of color and the meaning of language at the same time.[15] However, this theoretical approach has been continually disputed on the grounds that the two cognitive functions, color perception and word recognition, operate separately and simultaneously rather than in parallel with each other.[16]
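A minimal sketch of back-propagation learning follows, under illustrative assumptions: a small feedforward network (2 inputs, 4 hidden units, 1 output) is trained on the XOR pattern, with the output error propagated backwards to adjust every weight. The architecture, task, learning rate, and epoch count are invented for the example, not taken from any particular connectionist study.

```python
import math
import random

rng = random.Random(0)
N_HIDDEN = 4
# Small random initial weights and biases for a 2-4-1 network.
w_h = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(N_HIDDEN)]
b_h = [rng.uniform(-1, 1) for _ in range(N_HIDDEN)]
w_o = [rng.uniform(-1, 1) for _ in range(N_HIDDEN)]
b_o = rng.uniform(-1, 1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    # Forward pass: hidden activations, then the output response.
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w_h, b_h)]
    y = sigmoid(sum(w * hi for w, hi in zip(w_o, h)) + b_o)
    return h, y

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR targets
lr = 0.5

for _ in range(5000):
    for x, target in data:
        h, y = forward(x)
        # Error signal at the output (derivative of squared error).
        d_out = (y - target) * y * (1 - y)
        # Propagate the error backwards through each outgoing weight.
        d_hid = [d_out * w_o[j] * h[j] * (1 - h[j]) for j in range(N_HIDDEN)]
        # Gradient-descent weight updates.
        for j in range(N_HIDDEN):
            w_o[j] -= lr * d_out * h[j]
            b_h[j] -= lr * d_hid[j]
            for i in range(2):
                w_h[j][i] -= lr * d_hid[j] * x[i]
        b_o -= lr * d_out

for x, target in data:
    _, y = forward(x)
    print(x, "target:", target, "response:", round(y, 2))
```

After training, the printed responses typically lie close to the targets, which is the sense in which the backward propagation of errors improves the network's subsequent responses.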
The field of cognition may have benefited from the use of connectionist networks, but setting up neural network models can be quite a tedious task, and the results may be less interpretable than the system they are trying to model. Therefore, the results may be used as evidence for a broad theory of cognition without explaining the particular process happening within the cognitive function. Other disadvantages of connectionism lie in the research methods it employs and the hypotheses it tests, which have often proven inaccurate or ineffective, taking connectionist models away from an accurate representation of how the brain functions. These issues make neural network models ineffective for studying higher forms of information processing and hinder connectionism from advancing the general understanding of human cognition.[17]
Cognitive science is the interdisciplinary, scientific study of the mind and its processes, with input from linguistics, psychology, neuroscience, philosophy, computer science/artificial intelligence, and anthropology. It examines the nature, the tasks, and the functions of cognition. Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion. The typical analysis of cognitive science spans many levels of organization, from learning and decision-making to logic and planning, and from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."
Artificial consciousness (AC), also known as machine consciousness (MC), synthetic consciousness or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience. The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness.
Connectionism is the name of an approach to the study of human mental processes and cognition that utilizes mathematical models known as connectionist networks or artificial neural networks. Connectionism has had many 'waves' since its beginnings.
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.
The language of thought hypothesis (LOTH), sometimes known as thought ordered mental expression (TOME), is a view in linguistics, philosophy of mind and cognitive science, forwarded by American philosopher Jerry Fodor. It describes the nature of thought as possessing "language-like" or compositional structure. On this view, simple concepts combine in systematic ways to build thoughts. In its most basic form, the theory states that thought, like language, has syntax.
Dynamical systems theory is an area of mathematics used to describe the behavior of complex dynamical systems, usually by employing differential equations or difference equations. When differential equations are employed, the theory is called continuous dynamical systems. From a physical point of view, continuous dynamical systems is a generalization of classical mechanics, a generalization where the equations of motion are postulated directly and are not constrained to be Euler–Lagrange equations of a least action principle. When difference equations are employed, the theory is called discrete dynamical systems. When the time variable runs over a set that is discrete over some intervals and continuous over other intervals or is any arbitrary time-set such as a Cantor set, one gets dynamic equations on time scales. Some situations may also be modeled by mixed operators, such as differential-difference equations.
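Schematically, and with f standing for an arbitrary map rather than any particular physical system, the two basic cases can be written as:

```latex
% Generic illustrative forms only: a continuous-time system described by a
% differential equation, and a discrete-time system by a difference equation.
\dot{x}(t) = f\bigl(x(t)\bigr) \quad \text{(continuous dynamical system)}
\qquad
x_{n+1} = f(x_n) \quad \text{(discrete dynamical system)}
```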
Paul Smolensky is Krieger-Eisenhower Professor of Cognitive Science at the Johns Hopkins University and a Senior Principal Researcher at Microsoft Research in Redmond, Washington.
James Lloyd "Jay" McClelland, FBA is the Lucie Stern Professor at Stanford University, where he was formerly the chair of the Psychology Department. He is best known for his work on statistical learning and Parallel Distributed Processing, applying connectionist models to explain cognitive phenomena such as spoken word recognition and visual word recognition. McClelland is to a large extent responsible for the large increase in scientific interest in connectionism in the 1980s.
A cognitive architecture refers to both a theory about the structure of the human mind and to a computational instantiation of such a theory used in the fields of artificial intelligence (AI) and computational cognitive science. The formalized models can be used to further refine a comprehensive theory of cognition and as a useful artificial intelligence program. Successful cognitive architectures include ACT-R and SOAR. The research on cognitive architectures as software instantiation of cognitive theories was initiated by Allen Newell in 1990.
A neural network, also called a neuronal network, is an interconnected population of neurons. Biological neural networks are studied to understand the organization and functioning of nervous systems.
Neurophilosophy or philosophy of neuroscience is the interdisciplinary study of neuroscience and philosophy that explores the relevance of neuroscientific studies to the arguments traditionally categorized as philosophy of mind. The philosophy of neuroscience attempts to clarify neuroscientific methods and results using the conceptual rigor and methods of philosophy of science.
David Everett Rumelhart was an American psychologist who made many contributions to the formal analysis of human cognition, working primarily within the frameworks of mathematical psychology, symbolic artificial intelligence, and parallel distributed processing. He also admired formal linguistic approaches to cognition, and explored the possibility of formulating a formal grammar to capture the structure of stories.
In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. Warren McCulloch and Walter Pitts (1943) were the first to suggest that neural activity is computational. They argued that neural computations explain cognition. The theory was proposed in its modern form by Hilary Putnam in 1967, and developed by his PhD student, philosopher, and cognitive scientist Jerry Fodor in the 1960s, 1970s, and 1980s. It was vigorously disputed in analytic philosophy in the 1990s due to work by Putnam himself, John Searle, and others.
Embodied cognitive science is an interdisciplinary field of research, the aim of which is to explain the mechanisms underlying intelligent behavior. It comprises three main methodologies: the modeling of psychological and biological systems in a holistic manner that considers the mind and body as a single entity; the formation of a common set of general principles of intelligent behavior; and the experimental use of robotic agents in controlled environments.
Ron Sun is a cognitive scientist who made significant contributions to computational psychology and other areas of cognitive science and artificial intelligence. He is currently professor of cognitive sciences at Rensselaer Polytechnic Institute, and formerly the James C. Dowell Professor of Engineering and Professor of Computer Science at University of Missouri. He received his Ph.D. in 1992 from Brandeis University.
Harmonic grammar is a linguistic model proposed by Geraldine Legendre, Yoshiro Miyata, and Paul Smolensky in 1990. It is a connectionist approach to modeling linguistic well-formedness. During the late 2000s and early 2010s, the term 'harmonic grammar' came to be used more generally for models of language that use weighted constraints, including ones that are not explicitly connectionist – see e.g. Pater (2009) and Potts et al. (2010).
The LIDA cognitive architecture is an integrated artificial cognitive system that attempts to model a broad spectrum of cognition in biological systems, from low-level perception/action to high-level reasoning. Developed primarily by Stan Franklin and colleagues at the University of Memphis, the LIDA architecture is empirically grounded in cognitive science and cognitive neuroscience. In addition to providing hypotheses to guide further research, the architecture can support control structures for software agents and robots. Providing plausible explanations for many cognitive processes, the LIDA conceptual model is also intended as a tool with which to think about how minds work.
Cognitive musicology is a branch of cognitive science concerned with computationally modeling musical knowledge with the goal of understanding both music and cognition.
Robert M. French is a research director at the French National Centre for Scientific Research. He is currently at the University of Burgundy in Dijon. He holds a Ph.D. from the University of Michigan, where he worked with Douglas Hofstadter on the Tabletop computational cognitive model. He specializes in cognitive science and has made an extensive study of the process of analogy-making.