A deliberative agent (also known as an intentional agent) is a type of software agent used mainly in multi-agent system simulations. According to Wooldridge's definition, a deliberative agent is "one that possesses an explicitly represented, symbolic model of the world, and in which decisions (for example about what actions to perform) are made via symbolic reasoning".
Compared to reactive agents, which can reach their goals only by reacting reflexively to external stimuli, a deliberative agent's internal processes are more complex. The difference lies in the fact that a deliberative agent maintains a symbolic representation of the world it inhabits. In other words, it possesses an internal image of the external environment and is thus capable of planning its actions. The most commonly used architecture for implementing such behavior is the belief–desire–intention software model (BDI), in which an agent's beliefs about the world (its image of the world), desires (goals) and intentions are internally represented, and practical reasoning is applied to decide which action to select.
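The deliberation cycle described above can be sketched in a few lines of code. This is a minimal, illustrative sketch only; all names (`bdi_step`, `plan_library`, the toy actions) are invented here and are not drawn from any real BDI framework such as IRMA or PRS.

```python
# A minimal sketch of one BDI-style deliberation cycle (illustrative only).

def bdi_step(beliefs, desires, intentions, percept, plan_library):
    """One cycle: revise beliefs, commit to a desire, execute one action."""
    beliefs.update(percept)                 # 1. belief revision from new percepts
    if not intentions and desires:          # 2. deliberation: commit to one desire
        goal = desires.pop(0)
        intentions.extend(plan_library[goal](beliefs))  # 3. means-ends reasoning
    return intentions.pop(0) if intentions else None    # 4. execute next action

# Toy usage: an agent at location A adopts the desire to reach B.
beliefs = {"location": "A"}
desires = ["reach_B"]
intentions = []
plans = {"reach_B": lambda b: ["move_A_to_B"] if b["location"] == "A" else []}
action = bdi_step(beliefs, desires, intentions, {"clock": 1}, plans)
```

The point of the sketch is the separation of concerns: beliefs are updated from percepts, a desire is turned into an intention with an attached plan, and execution advances the plan one action at a time.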
Considerable research has focused on integrating the reactive and deliberative strategies, resulting in a compound design called the hybrid agent, which combines extensive manipulation of nontrivial symbolic structures with reflexive, reactive responses to external events.
As already mentioned, a deliberative agent possesses (a) an internal image of the outer world and (b) a goal to achieve, and is thus able to produce a list of actions (a plan) to reach that goal. Under unfavorable conditions, when the plan is no longer applicable, the agent is usually able to recompute it.
To compute (or recompute) a plan, the deliberative agent requires a symbolic representation with compositional semantics (e.g., a data tree) in all of its major functions, for its deliberation is not limited to present facts: it constructs hypotheses about possible future states and may also hold information about the past (i.e., a memory). These hypothetical states involve goals, plans, partial solutions, hypothetical states of the agent's beliefs, and so on. The deliberative process can therefore become considerably complex and computationally expensive.
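A symbolic representation with compositional semantics can be illustrated by a small evaluator over expression trees: the meaning of a compound expression is computed from the meanings of its parts. The operator names and belief atoms below are invented for illustration.

```python
# Illustrative sketch: formulas as nested tuples (a data tree) with
# compositional semantics; atoms are evaluated against current beliefs.

def holds(expr, beliefs):
    """Evaluate a symbolic expression tree against a set of atomic beliefs."""
    if isinstance(expr, str):               # atom: true iff currently believed
        return expr in beliefs
    op, *args = expr
    if op == "and":
        return all(holds(a, beliefs) for a in args)
    if op == "or":
        return any(holds(a, beliefs) for a in args)
    if op == "not":
        return not holds(args[0], beliefs)
    raise ValueError(f"unknown operator: {op}")

beliefs = {"door_open", "light_on"}
result = holds(("and", "door_open", ("not", "alarm_on")), beliefs)
```

Because the same evaluator works on any nesting of the tree, the agent can test conditions not only against its current beliefs but also against hypothetical future belief sets.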
Since the early 1970s, the AI planning community has worked on developing an artificial planning agent (a predecessor of the deliberative agent) that would be able to choose a proper plan leading to a specified goal. These early attempts resulted in a simple planning system called STRIPS. It soon became obvious that the STRIPS concept needed further improvement, as it was unable to solve problems of even moderate complexity efficiently. Despite considerable effort to raise its efficiency (for example by implementing hierarchical and non-linear planning), the system remained too slow for time-constrained applications.
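The core STRIPS idea, operators described by preconditions, add-lists and delete-lists, searched over to reach a goal, can be sketched compactly. This is a toy forward-search sketch under those assumptions, not the original STRIPS implementation, and the operator names are invented.

```python
# Toy STRIPS-style planning sketch: states are sets of facts; an operator
# (name, preconditions, add-list, delete-list) applies when its
# preconditions hold. Breadth-first forward search finds a plan.
from collections import deque

def find_plan(start, goal, operators):
    """Return a list of operator names reaching goal, or None."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                       # all goal facts achieved
            return steps
        for name, pre, add, dele in operators:
            if pre <= state:                    # preconditions satisfied
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

ops = [
    ("go_A_B", {"at_A"}, {"at_B"}, {"at_A"}),
    ("go_B_C", {"at_B"}, {"at_C"}, {"at_B"}),
]
steps = find_plan({"at_A"}, {"at_C"}, ops)
```

Even this tiny example hints at the efficiency problem mentioned above: the state space grows exponentially with the number of facts and operators, which is why plain STRIPS-style search struggled with time-constrained problems.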
More successful attempts to design planning agents were made in the late 1980s. For example, IPEM (the Integrated Planning, Execution and Monitoring system) embedded a sophisticated non-linear planner, Wood's AUTODRIVE simulated the behavior of deliberative agents in traffic, and Cohen's PHOENIX system was built to simulate forest-fire management.
In 1976, Newell and Simon formulated the physical symbol system hypothesis, which claims that both human and artificial intelligence rest on the same principle: symbol representation and manipulation. It follows from the hypothesis that there is no substantial difference in kind between human and machine intelligence, only quantitative and structural differences: machines are much less complex. Such a provocative proposition naturally became the object of serious criticism and raised wide discussion, but the problem itself remains essentially unresolved to this day.
Further development of classical symbolic AI proved not to depend on a final verification of the physical symbol system hypothesis at all. In 1988, Bratman, Israel and Pollack introduced the Intelligent Resource-bounded Machine Architecture (IRMA), the first system implementing the belief–desire–intention software model (BDI). IRMA exemplifies the standard idea of the deliberative agent as it is known today: a software agent embedding a symbolic representation and implementing BDI.
The above-mentioned troubles with symbolic AI led to serious doubts about the viability of the concept, which resulted in the development of the reactive architecture, based on wholly different principles. Developers of the new architecture rejected symbolic representation and manipulation as the basis of artificial intelligence. Reactive agents achieve their goals simply by reacting to changes in their environment, which implies reasonable computational modesty.
Although deliberative agents consume far more system resources than their reactive counterparts, their results are significantly better only in a few special situations; in many cases one deliberative agent can be replaced by a few reactive ones without losing a substantial amount of the simulation's adequacy. Classical deliberative agents seem most useful where correct action is required, thanks to their ability to produce optimal, domain-independent solutions. A deliberative agent often fails in a changing environment, as it is unable to re-plan its actions quickly enough.
Distributed Artificial Intelligence (DAI) also called Decentralized Artificial Intelligence is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems.
Symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level "symbolic" (human-readable) representations of problems, logic and search. Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the late 1980s.
Soar is a cognitive architecture, originally created by John Laird, Allen Newell, and Paul Rosenbloom at Carnegie Mellon University. It is now maintained and developed by John Laird's research group at the University of Michigan.
The belief–desire–intention software model (BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans and executing those plans. A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
A physical symbol system takes physical patterns (symbols), combines them into structures (expressions) and manipulates them to produce new expressions.
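The manipulation of expressions to produce new expressions can be made concrete with a tiny inference rule over symbolic structures. The rule (modus ponens), encoding, and knowledge base below are our own illustration, not part of Newell and Simon's formulation.

```python
# Illustrative physical-symbol-system sketch: symbols are strings,
# expressions are tuples, and repeatedly applying modus ponens
# ("from p and (implies p q), derive q") produces new expressions.

def modus_ponens(expressions):
    """Close a set of expressions under modus ponens."""
    derived = set(expressions)
    changed = True
    while changed:
        changed = False
        for e in list(derived):
            if isinstance(e, tuple) and e[0] == "implies" and e[1] in derived:
                if e[2] not in derived:
                    derived.add(e[2])       # a new expression is produced
                    changed = True
    return derived

kb = {"rain",
      ("implies", "rain", "wet_ground"),
      ("implies", "wet_ground", "slippery")}
derived = modus_ponens(kb)
```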
In artificial intelligence, an intelligent agent (IA) is an autonomous entity that observes its environment through sensors, acts upon it through actuators, and directs its activity towards achieving goals. Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex. A reflex machine, such as a thermostat, is considered an example of an intelligent agent.
Cognitive robotics is concerned with endowing a robot with intelligent behavior by providing it with a processing architecture that will allow it to learn and reason about how to behave in response to complex goals in a complex world. Cognitive robotics may be considered the engineering branch of embodied cognitive science and embodied embedded cognition.
Action selection is a way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, "the action selection problem" is typically associated with intelligent agents and animats—artificial systems that exhibit complex behaviour in an agent environment. The term is also sometimes used in ethology or animal behavior.
In artificial intelligence, reactive planning denotes a group of techniques for action selection by autonomous agents. These techniques differ from classical planning in two aspects. First, they operate in a timely fashion and hence can cope with highly dynamic and unpredictable environments. Second, they compute just one next action in every instant, based on the current context. Reactive planners often exploit reactive plans, which are stored structures describing the agent's priorities and behaviour.
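The "one next action per instant" idea can be sketched as a priority-ordered list of condition–action rules, a simple form of stored reactive plan. The rule set and sensor names here are invented for illustration.

```python
# Sketch of a reactive plan: an ordered list of (condition, action) rules;
# each cycle the agent emits exactly one action for the current context.

def next_action(context, rules):
    """Return the action of the first (highest-priority) matching rule."""
    for condition, action in rules:
        if condition(context):
            return action
    return "idle"                           # default when nothing matches

rules = [
    (lambda c: c.get("obstacle_ahead"), "turn_left"),   # highest priority
    (lambda c: c.get("goal_visible"),   "move_forward"),
]
```

Because the agent never builds a plan ahead of time, a single pass over the rules suffices each cycle, which is what lets reactive planners keep up with highly dynamic environments.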
Keith Leonard Clark is an Emeritus Professor in the Department of Computing at Imperial College London, England.
In artificial intelligence, a procedural reasoning system (PRS) is a framework for constructing real-time reasoning systems that can perform complex tasks in dynamic environments. It is based on the notion of a rational agent or intelligent agent using the belief–desire–intention software model.
GOAL is an agent programming language for programming cognitive agents. GOAL agents derive their choice of action from their beliefs and goals. The language provides the basic building blocks to design and implement cognitive agents by programming constructs that allow and facilitate the manipulation of an agent's beliefs and goals and to structure its decision-making. The language provides an intuitive programming framework based on common sense or practical reasoning.
AgentSpeak is an agent-oriented programming language. It is based on logic programming and the belief–desire–intention software model (BDI) architecture for (cognitive) autonomous agents. The language was originally called AgentSpeak(L), but became more popular as AgentSpeak, a term that is also used to refer to the variants of the original language.
JACK Intelligent Agents is a framework in Java for multi-agent system development. JACK Intelligent Agents was built by Agent Oriented Software Pty. Ltd. (AOS) and is a third generation agent platform building on the experiences of the Procedural Reasoning System (PRS) and Distributed Multi-Agent Reasoning System (dMARS). JACK is one of the few multi-agent systems that uses the BDI software model and provides its own Java-based plan language and graphical planning tools.
In artificial intelligence, the distributed multi-agent reasoning system (dMARS) was a platform for intelligent software agents developed at the AAII that makes uses of the belief–desire–intention software model (BDI). The design for dMARS was an extension of the intelligent agent cognitive architecture developed at SRI International called procedural reasoning system (PRS). The most recent incarnation of this framework is the JACK Intelligent Agents platform.
Michael Peter Georgeff is a computer scientist and entrepreneur who has made contributions in the areas of Intelligent Software Agents and eHealth.
In artificial intelligence research, the situated approach builds agents that are designed to behave successfully in their environment. This requires designing AI "from the bottom up" by focusing on the basic perceptual and motor skills required to survive. The situated approach gives a much lower priority to abstract reasoning or problem-solving skills.
Agent-oriented programming (AOP) is a programming paradigm where the construction of the software is centered on the concept of software agents. In contrast to object-oriented programming, which has objects at its core, AOP has externally specified agents at its core. They can be thought of as abstractions of objects. Exchanged messages are interpreted by receiving agents, in a way specific to their class of agents.
Franciscus Petrus Maria (Frank) Dignum is a Dutch computer scientist. He is currently a Professor of Socially-Aware AI at Umeå University and an Associate Professor at the Department of Information and Computing Sciences of Utrecht University. Dignum is best known for his work on software agents, multi-agent systems and fundamental aspects of social agents.
Human-Robot Collaboration is the study of collaborative processes in which human and robot agents work together to achieve shared goals. Many new applications for robots require them to work alongside people as capable members of human-robot teams. These include robots for homes, hospitals and offices, space exploration, and manufacturing. Human-Robot Collaboration (HRC) is an interdisciplinary research area comprising classical robotics, human-computer interaction, artificial intelligence, design, cognitive sciences and psychology.