LIDA (cognitive architecture)

The LIDA (Learning Intelligent Distribution Agent) cognitive architecture is an integrated artificial cognitive system that attempts to model a broad spectrum of cognition in biological systems, from low-level perception/action to high-level reasoning. Developed primarily by Stan Franklin and colleagues at the University of Memphis, the LIDA architecture is empirically grounded in cognitive science and cognitive neuroscience. In addition to providing hypotheses to guide further research, the architecture can support control structures for software agents and robots. Providing plausible explanations for many cognitive processes, the LIDA conceptual model is also intended as a tool with which to think about how minds work.

Two hypotheses underlie the LIDA architecture and its corresponding conceptual model: 1) much of human cognition functions by means of frequently iterated (~10 Hz) interactions, called cognitive cycles, between conscious contents, the various memory systems, and action selection; and 2) these cognitive cycles serve as the "atoms" of cognition out of which higher-level cognitive processes are composed.
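
To make the cycle rate concrete: roughly 10 Hz means that each cognitive cycle occupies on the order of 100 ms, so a cognitive act lasting a few seconds spans a few dozen cycles. The sketch below only illustrates this cadence; the loop, the `cognitive_cycle` stub, and the constants are assumptions of this example, not part of the LIDA software framework.

```python
import time

CYCLE_HZ = 10                    # assumed ~10 cognitive cycles per second
CYCLE_PERIOD = 1.0 / CYCLE_HZ    # ~100 ms per cycle

def cognitive_cycle(stimulus):
    """Stand-in for a single cognitive cycle (the 'atom' of cognition)."""
    return {"stimulus": stimulus, "action": "noop"}

def run_agent(stimuli):
    """A higher-level cognitive process is composed of many iterated cycles."""
    for stimulus in stimuli:
        start = time.monotonic()
        cognitive_cycle(stimulus)
        # Pad out the remainder of this cycle's ~100 ms budget.
        time.sleep(max(0.0, CYCLE_PERIOD - (time.monotonic() - start)))

run_agent(range(30))             # ~30 cycles, i.e. about 3 seconds
```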

Overview

Though it is neither symbolic nor strictly connectionist, LIDA is a hybrid architecture in that it employs a variety of computational mechanisms, chosen for their psychological plausibility. The LIDA cognitive cycle is composed of modules and processes employing these mechanisms.

Computational mechanisms

The LIDA architecture employs several modules that are designed using computational mechanisms drawn from the "new AI". These include variants of the Copycat Architecture, [1] [2] sparse distributed memory, [3] [4] the schema mechanism, [5] [6] the Behavior Net, [7] [8] and the subsumption architecture. [9]
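
As a flavor of what such mechanisms look like in code, the following is a minimal sketch of just one of them, a Kanerva-style sparse distributed memory. It is written for illustration only and is not taken from the LIDA Framework; the class name, parameters, and sizes are assumptions of this example.

```python
import numpy as np

class SparseDistributedMemory:
    """Minimal Kanerva-style sparse distributed memory over binary vectors."""

    def __init__(self, n_locations=1000, dim=256, radius=115, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random "hard locations"; each holds one signed counter per bit.
        self.addresses = rng.integers(0, 2, size=(n_locations, dim), dtype=np.int8)
        self.counters = np.zeros((n_locations, dim), dtype=np.int32)
        self.radius = radius            # Hamming-distance access radius

    def _activated(self, address):
        # Boolean mask of hard locations within `radius` of the cue address.
        distances = np.count_nonzero(self.addresses != address, axis=1)
        return distances <= self.radius

    def write(self, address, data):
        # Add +1 for each 1-bit and -1 for each 0-bit to all activated locations.
        active = self._activated(address)
        self.counters[active] += np.where(data == 1, 1, -1).astype(np.int32)

    def read(self, address):
        # Sum the counters of activated locations and threshold at zero.
        sums = self.counters[self._activated(address)].sum(axis=0)
        return (sums > 0).astype(np.int8)

# Usage: store a pattern at its own address, then recall it from a noisy cue.
rng = np.random.default_rng(1)
sdm = SparseDistributedMemory()
pattern = rng.integers(0, 2, size=256, dtype=np.int8)
sdm.write(pattern, pattern)
noisy = pattern.copy()
noisy[:16] ^= 1                         # corrupt 16 of the 256 bits
recalled = sdm.read(noisy)              # should closely match `pattern`
```

Because recall from a nearby, noisy cue still recovers the stored pattern, the mechanism behaves as a content-addressable, noise-tolerant memory, which is the property that makes it attractive for modeling perceptual and episodic memory.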

Psychological and neurobiological underpinnings

As a comprehensive conceptual and computational cognitive architecture, the LIDA architecture is intended to model a large portion of human cognition. [10] [11] Comprising a broad array of cognitive modules and processes, it attempts to implement and flesh out a number of psychological and neuropsychological theories, including Global Workspace Theory, [12] situated cognition, [13] perceptual symbol systems, [14] working memory, [15] memory by affordances, [16] long-term working memory, [17] and the H-CogAff architecture. [18]

LIDA's cognitive cycle

The LIDA cognitive cycle can be subdivided into three phases: the understanding phase, the attention (consciousness) phase, and the action selection and learning phase.

In the understanding phase, incoming stimuli activate low-level feature detectors in Sensory Memory. The output engages Perceptual Associative Memory, where higher-level feature detectors feed into more abstract entities such as objects, categories, actions, and events. The resulting percept moves to the Workspace, where it cues both Transient Episodic Memory and Declarative Memory, producing local associations. These local associations are combined with the percept to generate a current situational model: the agent's understanding of what is going on right now.

The attention phase begins with the formation of coalitions of the most salient portions of the current situational model, which then compete for attention, that is, for a place in the current conscious contents. These conscious contents are then broadcast globally, initiating the action selection and learning phase.

As the conscious broadcast reaches the various forms of memory (perceptual, episodic, and procedural), new entities and associations are learned and old ones reinforced. In parallel with this learning, and using the conscious contents, possible action schemes are instantiated from Procedural Memory and sent to Action Selection, where they compete to be the behavior selected for this cognitive cycle. The selected behavior triggers Sensory-Motor Memory to produce a suitable algorithm for its execution, which completes the cognitive cycle.
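
The following self-contained sketch restates the three phases as code. It is a drastic simplification intended only to mirror the description above; the class names, data structures, and selection rule are assumptions of this example, not the LIDA Framework's actual (Java) API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Coalition:
    content: str
    salience: float

@dataclass
class Agent:
    """Toy agent with drastically simplified stand-ins for LIDA's modules."""
    transient_episodic: List[str] = field(default_factory=list)
    declarative: List[str] = field(default_factory=list)
    procedural: Dict[str, str] = field(default_factory=dict)  # broadcast -> scheme

    def understand(self, stimulus: str) -> dict:
        # Understanding phase: recognize a percept and cue the episodic stores.
        percept = f"percept({stimulus})"
        associations = [m for m in self.transient_episodic + self.declarative
                        if stimulus in m]
        return {"percept": percept, "associations": associations}

    def attend(self, model: dict) -> str:
        # Attention phase: form coalitions and let the most salient one win.
        coalitions = [Coalition(model["percept"], 1.0)]
        coalitions += [Coalition(a, 0.5) for a in model["associations"]]
        winner = max(coalitions, key=lambda c: c.salience)
        return winner.content                     # the global conscious broadcast

    def act_and_learn(self, broadcast: str) -> str:
        # Action selection and learning phase: learn from the broadcast and
        # return the behavior whose scheme the broadcast instantiates.
        self.transient_episodic.append(broadcast)          # episodic learning
        return self.procedural.get(broadcast, "explore")   # selected behavior

    def cognitive_cycle(self, stimulus: str) -> str:
        model = self.understand(stimulus)
        broadcast = self.attend(model)
        return self.act_and_learn(broadcast)

agent = Agent(procedural={"percept(light)": "approach"})
print(agent.cognitive_cycle("light"))   # -> "approach"
```

In the architecture itself each of these steps is a separate module with its own learning mechanism, and selection is a genuine competition rather than a single maximum; the sketch captures only the ordering of the three phases within one cycle.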

History

Virtual Mattie (V-Mattie) is a software agent [19] that gathers information from seminar organizers, composes announcements of next week's seminars, and mails them each week to a mailing list that it keeps updated, all without human supervision. [20] V-Mattie employed many of the computational mechanisms mentioned above.

Baars' Global Workspace Theory (GWT) inspired the transformation of V-Mattie into Conscious Mattie, a software agent with the same domain and tasks whose architecture included a consciousness mechanism à la GWT. Conscious Mattie was the first functionally, though not phenomenally, conscious software agent. Conscious Mattie gave rise to IDA.

IDA (Intelligent Distribution Agent) was developed for the US Navy [21] [22] [23] to perform tasks handled by human resources personnel called detailers. At the end of each sailor's tour of duty, he or she is assigned to a new billet; this assignment process is called distribution. The Navy employs almost 300 full-time detailers to effect these new assignments. IDA's task is to facilitate this process by automating the role of the detailer. IDA was tested by former detailers and accepted by the Navy. Various Navy agencies supported the IDA project with roughly $1,500,000 in funding.

The LIDA (Learning IDA) architecture was originally spawned from IDA by the addition of several styles and modes of learning, [24] [25] [26] but has since grown into a much larger, more generic software framework. [27] [28]

Footnotes

  1. Hofstadter, D. (1995). Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books.
  2. Marshall, J. (2002). Metacat: A self-watching cognitive architecture for analogy-making. In W. D. Gray & C. D. Schunn (eds.), Proceedings of the 24th Annual Conference of the Cognitive Science Society, pp. 631-636. Mahwah, NJ: Lawrence Erlbaum Associates
  3. Kanerva, P. (1988). Sparse Distributed Memory. Cambridge MA: The MIT Press
  4. Rao, R. P. N., & Fuentes, O. (1998). Hierarchical Learning of Navigational Behaviors in an Autonomous Robot using a Predictive Sparse Distributed Memory. Machine Learning, 31, 87–113
  5. Drescher, G.L. (1991). Made-up minds: A Constructivist Approach to Artificial Intelligence
  6. Chaput, H. H., Kuipers, B., & Miikkulainen, R. (2003). Constructivist Learning: A Neural Implementation of the Schema Mechanism. Paper presented at the Proceedings of WSOM '03: Workshop for Self-Organizing Maps, Kitakyushu, Japan
  7. Maes, P. (1989). How to do the right thing. Connection Science, 1, 291–323
  8. Tyrrell, T. (1994). An Evaluation of Maes's Bottom-Up Mechanism for Behavior Selection. Adaptive Behavior, 2, 307–348
  9. Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159
  10. Franklin, S., & Patterson, F. G. J. (2006). The LIDA Architecture: Adding New Modes of Learning to an Intelligent, Autonomous, Software Agent. In IDPT-2006 Proceedings (Integrated Design and Process Technology). Society for Design and Process Science
  11. Franklin, S., Ramamurthy, U., D'Mello, S., McCauley, L., Negatu, A., Silva R., & Datla, V. (2007). LIDA: A computational model of global workspace theory and developmental learning. In AAAI Fall Symposium on AI and Consciousness: Theoretical Foundations and Current Approaches. Arlington, VA: AAAI
  12. Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge: Cambridge University Press
  13. Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind. Cambridge, Massachusetts: MIT Press
  14. Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–609
  15. Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. A. Bower (Ed.), The Psychology of Learning and Motivation (pp. 47–89). New York: Academic Press
  16. Glenberg, A. M. (1997). What memory is for. Behavioral and Brain Sciences, 20, 1–19
  17. Ericsson, K. A., & Kintsch, W. (1995). Long-term working memory. Psychological Review, 102, 211–245
  18. Sloman, A. (1999). What Sort of Architecture is Required for a Human-like Agent? In M. Wooldridge & A. Rao (Eds.), Foundations of Rational Agency. Dordrecht, Netherlands: Kluwer Academic Publishers
  19. Franklin, S., & Graesser, A. (1997). Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. In Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, published as Intelligent Agents III (pp. 21–35). Springer-Verlag
  20. Franklin, S., Graesser, A., Olde, B., Song, H., & Negatu, A. (1996, Nov). Virtual Mattie—an Intelligent Clerical Agent. Paper presented at the Symposium on Embodied Cognition and Action: AAAI, Cambridge, Massachusetts.
  21. Franklin, S., Kelemen, A., & McCauley, L. (1998). IDA: A Cognitive Agent Architecture. In IEEE Conference on Systems, Man and Cybernetics (pp. 2646–2651). IEEE Press
  22. Franklin, S. (2003). IDA: A Conscious Artifact? Journal of Consciousness Studies, 10, 47–66
  23. Franklin, S., & McCauley, L. (2003). Interacting with IDA. In H. Hexmoor, C. Castelfranchi & R. Falcone (Eds.), Agent Autonomy (pp. 159–186). Dordrecht: Kluwer
  24. D'Mello, Sidney K., Ramamurthy, U., Negatu, A., & Franklin, S. (2006). A Procedural Learning Mechanism for Novel Skill Acquisition. In T. Kovacs & James A. R. Marshall (Eds.), Proceeding of Adaptation in Artificial and Biological Systems, AISB'06 (Vol. 1, pp. 184–185). Bristol, England: Society for the Study of Artificial Intelligence and the Simulation of Behaviour
  25. Franklin, S. (2005, March 21–23, 2005). Perceptual Memory and Learning: Recognizing, Categorizing, and Relating. Paper presented at the Symposium on Developmental Robotics: American Association for Artificial Intelligence (AAAI), Stanford University, Palo Alto CA, USA
  26. Franklin, S., & Patterson, F. G. J. (2006). The LIDA Architecture: Adding New Modes of Learning to an Intelligent, Autonomous, Software Agent. In IDPT-2006 Proceedings (Integrated Design and Process Technology). Society for Design and Process Science
  27. Franklin, S., & McCauley, L. (2004). Feelings and Emotions as Motivators and Learning Facilitators. In Architectures for Modeling Emotion: Cross-Disciplinary Foundations, AAAI 2004 Spring Symposium Series (Technical Report SS-04-02, pp. 48–51). Stanford University, Palo Alto, California, USA: American Association for Artificial Intelligence
  28. Negatu, A., D'Mello, Sidney K., & Franklin, S. (2007). Cognitively Inspired Anticipation and Anticipatory Learning Mechanisms for Autonomous Agents. In M. V. Butz, O. Sigaud, G. Pezzulo & G. O. Baldassarre (Eds.), Proceedings of the Third Workshop on Anticipatory Behavior in Adaptive Learning Systems (ABiALS 2006) (pp. 108-127). Rome, Italy: Springer Verlag
