In artificial intelligence, hierarchical task network (HTN) planning is an approach to automated planning in which the dependency among actions can be given in the form of hierarchically structured networks.
Planning problems are specified in the hierarchical task network approach by providing a set of tasks, which can be primitive tasks, compound tasks, or goal tasks.
A solution to an HTN problem is then an executable sequence of primitive tasks that can be obtained from the initial task network by decomposing compound tasks into their set of simpler tasks, and by inserting ordering constraints.
A primitive task is an action that can be executed directly, provided that the state in which it is executed satisfies its precondition. A compound task is a complex task composed of a partially ordered set of further tasks, which can be either primitive or abstract. A goal task is a task of satisfying a condition. The difference between primitive tasks and the other kinds is that primitive actions can be executed directly. Compound and goal tasks both require a sequence of primitive actions to be performed; however, goal tasks are specified in terms of conditions that have to be made true, while compound tasks can only be specified in terms of other tasks via the task networks outlined below.
Constraints among tasks are expressed in the form of networks, called (hierarchical) task networks. A task network is a set of tasks and constraints among them. Such a network can be used as the precondition for another compound or goal task to be feasible. This way, one can express that a given task is feasible only if a set of other actions (those mentioned in the network) are done, and they are done in such a way that the constraints among them (specified by the network) are satisfied. One particular formalism for representing hierarchical task networks that has been fairly widely used is TAEMS.
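For illustration, the following is a minimal sketch in Python of HTN-style decomposition; it is not any particular planner's API, and the travel-domain task names, methods, and preconditions are invented. A compound task is decomposed by trying its methods in order until one yields an executable sequence of primitive tasks.

```python
# Minimal HTN-style decomposition sketch (hypothetical travel domain).
# Primitive tasks are (precondition, effect) pairs over a state dict;
# compound tasks map to ordered lists of subtasks via "methods".

primitive = {
    "walk": (lambda s: True, lambda s: {**s, "at": "park"}),
    "call_taxi": (lambda s: s["money"] >= 10, lambda s: {**s, "taxi": True}),
    "ride_taxi": (lambda s: s.get("taxi", False),
                  lambda s: {**s, "at": "park", "money": s["money"] - 10}),
}

# Each compound task has one or more methods: ordered lists of subtasks.
methods = {
    "travel": [["call_taxi", "ride_taxi"], ["walk"]],
}

def decompose(state, tasks):
    """Return a sequence of primitive tasks solving `tasks`, or None."""
    if not tasks:
        return []
    head, rest = tasks[0], tasks[1:]
    if head in primitive:                      # primitive: check and apply
        pre, eff = primitive[head]
        if not pre(state):
            return None
        tail = decompose(eff(state), rest)
        return None if tail is None else [head] + tail
    for subtasks in methods[head]:             # compound: try each method
        plan = decompose(state, subtasks + rest)
        if plan is not None:
            return plan
    return None

print(decompose({"at": "home", "money": 12}, ["travel"]))
# -> ['call_taxi', 'ride_taxi']
print(decompose({"at": "home", "money": 3}, ["travel"]))
# -> ['walk']
```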
Some of the best-known domain-independent HTN-planning systems are Nonlin, SIPE-2, O-Plan, UMCP, SHOP2, and PANDA.
HTN planning is strictly more expressive than STRIPS, to the point of being undecidable in the general case.[10] However, many syntactic restrictions of HTN planning are decidable, with known complexities ranging from NP-complete to 2-EXPSPACE-complete,[11] and some HTN problems can be efficiently compiled into PDDL, a STRIPS-like language.[12]
A blackboard system is an artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts.
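As a rough illustration of the control loop, the sketch below (in Python, with invented knowledge sources solving a toy text-assembly problem) shows specialists repeatedly inspecting a shared blackboard and contributing a partial solution whenever their own condition matches the current state.

```python
# Toy blackboard loop: specialists extend a shared dict until a "solution"
# entry appears. The problem (assembling a greeting) is purely illustrative.

def propose_greeting(bb):
    if "greeting" not in bb:
        bb["greeting"] = "Hello"
        return True
    return False

def propose_name(bb):
    if "greeting" in bb and "name" not in bb:
        bb["name"] = bb["problem"]["addressee"]
        return True
    return False

def assemble(bb):
    if {"greeting", "name"} <= bb.keys() and "solution" not in bb:
        bb["solution"] = f'{bb["greeting"]}, {bb["name"]}!'
        return True
    return False

knowledge_sources = [assemble, propose_name, propose_greeting]

def run_blackboard(problem):
    blackboard = {"problem": problem}        # starts with the problem spec
    while "solution" not in blackboard:
        # Fire the first knowledge source whose internal condition matches.
        if not any(ks(blackboard) for ks in knowledge_sources):
            raise RuntimeError("no knowledge source can contribute")
        # (a real controller would rank competing contributions here)
    return blackboard["solution"]

print(run_blackboard({"addressee": "world"}))   # -> Hello, world!
```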
Automated planning and scheduling, sometimes denoted as simply AI planning, is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory.
The Planning Domain Definition Language (PDDL) is an attempt to standardize Artificial Intelligence (AI) planning languages. It was first developed by Drew McDermott and his colleagues in 1998 mainly to make the 1998/2000 International Planning Competition (IPC) possible, and then evolved with each competition. The standardization provided by PDDL has the benefit of making research more reusable and easily comparable, though at the cost of some expressive power, compared to domain-specific systems.
The Stanford Research Institute Problem Solver, known by its acronym STRIPS, is an automated planner developed by Richard Fikes and Nils Nilsson in 1971 at SRI International. The same name was later used to refer to the formal language of the inputs to this planner. This language is the base for most of the languages for expressing automated planning problem instances in use today; such languages are commonly known as action languages. This article only describes the language, not the planner.
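To illustrate the style of representation (not the original 1971 planner's code), here is a minimal Python sketch of a STRIPS-like operator with a precondition list, an add list, and a delete list, applied to a state represented as a set of ground facts; the key-pickup operator is an assumed example.

```python
from typing import NamedTuple, FrozenSet

class Action(NamedTuple):
    name: str
    preconditions: FrozenSet[str]   # facts that must hold
    add_list: FrozenSet[str]        # facts made true
    delete_list: FrozenSet[str]     # facts made false

def applicable(state: set, action: Action) -> bool:
    return action.preconditions <= state

def apply(state: set, action: Action) -> set:
    return (state - action.delete_list) | action.add_list

# Illustrative operator: pick up a key that is in the same room as the robot.
pickup_key = Action(
    name="pickup(key1)",
    preconditions=frozenset({"at(robot,room1)", "at(key1,room1)", "handempty"}),
    add_list=frozenset({"holding(key1)"}),
    delete_list=frozenset({"at(key1,room1)", "handempty"}),
)

state = {"at(robot,room1)", "at(key1,room1)", "handempty"}
if applicable(state, pickup_key):
    state = apply(state, pickup_key)
print(sorted(state))   # ['at(robot,room1)', 'holding(key1)']
```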
James Alexander Hendler is an artificial intelligence researcher at Rensselaer Polytechnic Institute, United States, and one of the originators of the Semantic Web. He is a Fellow of the National Academy of Public Administration.
The blocks world is a planning domain in artificial intelligence. The domain consists of a set of wooden blocks of various shapes and colors sitting on a table. The goal is to build one or more vertical stacks of blocks. Only one block may be moved at a time: it may either be placed on the table or placed atop another block. Because of this, any blocks that are, at a given time, under another block cannot be moved. Moreover, some kinds of blocks cannot have other blocks stacked on top of them.
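A minimal Python sketch of the domain's basic rules (an assumed encoding that represents the state as a mapping from each block to what it rests on, and ignores the optional restriction on un-stackable blocks):

```python
# Blocks-world state: maps each block to what it rests on ("table" or a block).
state = {"A": "table", "B": "A", "C": "table"}   # B sits on A; A and C on table

def clear(block, state):
    """A block is clear if nothing rests on top of it."""
    return block not in state.values()

def legal_moves(state):
    """Yield (block, destination) pairs allowed by the domain rules."""
    for block in state:
        if not clear(block, state):          # blocks under others cannot move
            continue
        if state[block] != "table":
            yield (block, "table")           # put the block down on the table
        for dest in state:
            if dest != block and clear(dest, state):
                yield (block, dest)          # stack it on another clear block

def move(state, block, dest):
    assert (block, dest) in set(legal_moves(state))
    return {**state, block: dest}

print(sorted(legal_moves(state)))
# [('B', 'C'), ('B', 'table'), ('C', 'B')]
print(move(state, "B", "C"))   # {'A': 'table', 'B': 'C', 'C': 'table'}
```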
In artificial intelligence, reactive planning denotes a group of techniques for action selection by autonomous agents. These techniques differ from classical planning in two aspects. First, they operate in a timely fashion and hence can cope with highly dynamic and unpredictable environments. Second, they compute just one next action in every instant, based on the current context. Reactive planners often exploit reactive plans, which are stored structures describing the agent's priorities and behaviour. The term reactive planning goes back to at least 1988, and is synonymous with the more modern term dynamic planning.
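For instance, a reactive plan can be as simple as a prioritized list of condition-action rules consulted anew at every step; the Python sketch below (the robot sensor readings and rule names are hypothetical) returns exactly one next action for the current context, with no look-ahead search.

```python
# A reactive plan as a prioritized list of (condition, action) rules.
rules = [
    (lambda ctx: ctx["obstacle_ahead"],      "turn_left"),
    (lambda ctx: ctx["battery"] < 0.2,       "return_to_dock"),
    (lambda ctx: ctx["goal_visible"],        "move_toward_goal"),
    (lambda ctx: True,                       "explore"),        # default rule
]

def next_action(context):
    """Return the single next action for the current sensor context."""
    for condition, action in rules:          # earlier rules have priority
        if condition(context):
            return action

print(next_action({"obstacle_ahead": False, "battery": 0.9, "goal_visible": True}))
# -> move_toward_goal
print(next_action({"obstacle_ahead": True, "battery": 0.1, "goal_visible": False}))
# -> turn_left
```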
In artificial intelligence, action description language (ADL) is a language for specifying actions in automated planning and scheduling, in particular for robots. It is considered an advancement of STRIPS. Edwin Pednault proposed this language in 1987. It is an example of an action language.
A hierarchical control system (HCS) is a form of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system.
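A schematic Python sketch of the tree arrangement (the node names and command strings are invented): each node refines a command from its parent into subcommands for its children, and only the leaves would drive actual devices.

```python
# Schematic hierarchical controller: each node translates a parent command
# into subcommands for its children; leaves would drive actual devices.

class ControlNode:
    def __init__(self, name, children=(), refine=None):
        self.name = name
        self.children = list(children)
        # `refine` maps a command to one subcommand per child (identity if absent).
        self.refine = refine or (lambda cmd: [cmd] * len(self.children))

    def execute(self, command):
        print(f"{self.name}: {command}")
        if not self.children:                      # leaf node: actuate device
            return
        for child, sub in zip(self.children, self.refine(command)):
            child.execute(sub)                     # delegate down the tree

left_wheel = ControlNode("left_wheel_motor")
right_wheel = ControlNode("right_wheel_motor")
drive = ControlNode("drive_controller", [left_wheel, right_wheel],
                    refine=lambda cmd: ["spin_forward", "spin_forward"]
                    if cmd == "go_straight" else ["stop", "stop"])
robot = ControlNode("mission_controller", [drive])

robot.execute("go_straight")
# mission_controller: go_straight
# drive_controller: go_straight
# left_wheel_motor: spin_forward
# right_wheel_motor: spin_forward
```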
Psi-theory, developed by Dietrich Dörner at the University of Bamberg, is a systemic psychological theory covering human action regulation, intention selection and emotion. It models the human mind as an information processing agent, controlled by a set of basic physiological, social and cognitive drives. Perceptual and cognitive processing are directed and modulated by these drives, which allow the autonomous establishment and pursuit of goals in an open environment.
GOAL is an agent programming language for programming cognitive agents. GOAL agents derive their choice of action from their beliefs and goals. The language provides the basic building blocks to design and implement cognitive agents: programming constructs that facilitate the manipulation of an agent's beliefs and goals and the structuring of its decision-making. The language provides an intuitive programming framework based on common sense or practical reasoning.
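As a loose illustration of the idea (in Python rather than GOAL's own syntax, with made-up beliefs, goals, and rules), an agent can select its next action from rules whose conditions mention both its current beliefs and its goals:

```python
# Belief/goal-driven action selection, loosely in the spirit of cognitive
# agent programming; this is not GOAL syntax, just a Python illustration.

beliefs = {"door_closed", "at_door"}
goals = {"in_room"}

# Action rules: (required beliefs, required goals, action).
action_rules = [
    ({"at_door", "door_open"},   {"in_room"}, "enter_room"),
    ({"at_door", "door_closed"}, {"in_room"}, "open_door"),
]

def select_action(beliefs, goals):
    """Pick the first rule whose belief and goal conditions both hold."""
    for needed_beliefs, needed_goals, action in action_rules:
        if needed_beliefs <= beliefs and needed_goals <= goals:
            return action
    return None

print(select_action(beliefs, goals))                       # -> open_door
beliefs = (beliefs - {"door_closed"}) | {"door_open"}      # belief update
print(select_action(beliefs, goals))                       # -> enter_room
```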
Dana S. Nau is a Professor of Computer Science and Systems Research at the University of Maryland, College Park, where he has done research in automated planning and scheduling, game theory, cognitive science, and computer-aided engineering. He has supervised many PhD students, including Qiang Yang, who graduated in 1989. He has more than 300 publications and several best-paper awards. Some of his accomplishments include the discovery of game tree pathology, the development of the SHOP and SHOP2 HTN planning systems, and the book Automated Planning: Theory and Practice (ISBN 1-55860-856-7). He is a Fellow of the AAAI and in 2022 he was elected as a Fellow of the AAAS.
In artificial intelligence research, the situated approach builds agents that are designed to behave successfully in their environment. This requires designing AI "from the bottom up" by focusing on the basic perceptual and motor skills required to survive. The situated approach gives a much lower priority to abstract reasoning or problem-solving skills.
Partial-order planning is an approach to automated planning that maintains a partial ordering between actions and commits to an ordering only when forced to; that is, the ordering of actions is partial. In particular, such a plan does not specify which of two actions comes first unless some constraint requires it. By contrast, total-order planning maintains a total ordering between all actions at every stage of planning. Given a problem in which some sequence of actions is needed to achieve a goal, a partial-order plan specifies all actions that must be taken, but specifies an ordering between actions only where needed.
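Concretely, a partial-order plan can be stored as a set of actions plus a set of ordering constraints, from which any consistent total order (a linearization) can be extracted. The small Python sketch below uses a topological sort; the dinner-preparation actions are invented for illustration.

```python
from graphlib import TopologicalSorter   # Python 3.9+

# A partial-order plan: the actions, plus only the orderings that are needed.
actions = {"buy_food", "cook", "set_table", "serve"}
orderings = {("buy_food", "cook"), ("cook", "serve"), ("set_table", "serve")}
# Note: no ordering is committed between "cook" and "set_table".

def linearize(actions, orderings):
    """Return one total order consistent with the partial order."""
    ts = TopologicalSorter({a: set() for a in actions})
    for before, after in orderings:
        ts.add(after, before)            # `after` depends on `before`
    return list(ts.static_order())

print(linearize(actions, orderings))
# e.g. ['buy_food', 'set_table', 'cook', 'serve']  (one valid linearization)
```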
In philosophy, a process ontology refers to a universal model of the structure of the world as an ordered wholeness. Such ontologies are fundamental ontologies, in contrast to the so-called applied ontologies. Fundamental ontologies do not claim to be accessible to any empirical proof in itself but to be a structural design pattern, out of which empirical phenomena can be explained and put together consistently. Throughout Western history, the dominating fundamental ontology is the so-called substance theory. However, fundamental process ontologies have become more important in recent times, because the progress in the discovery of the foundations of physics has spurred the development of a basic concept able to integrate such boundary notions as "energy," "object", and those of the physical dimensions of space and time.
In artificial intelligence, preference-based planning is a form of automated planning and scheduling which focuses on producing plans that additionally satisfy as many user-specified preferences as possible. In many problem domains, a task can be accomplished by various sequences of actions. These plans can vary in quality: there can be many ways to solve a problem, but plans that are, for example, more cost-effective, quicker, or safer are generally preferred.
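A minimal way to see this is to score candidate plans by how many (possibly weighted) user preferences they satisfy; the Python sketch below, with invented plans and preferences, picks the highest-scoring plan among those that achieve the goal.

```python
# Rank candidate plans (all of which achieve the goal) by how well they
# satisfy weighted, user-specified preferences. Everything here is invented
# for illustration.

candidate_plans = {
    "drive_highway": {"duration": 30, "cost": 8, "scenic": False},
    "drive_backroads": {"duration": 45, "cost": 5, "scenic": True},
}

# Each preference: (weight, predicate over plan properties).
preferences = [
    (2.0, lambda p: p["duration"] <= 35),   # prefer quick plans
    (1.0, lambda p: p["cost"] <= 6),        # prefer cheap plans
    (0.5, lambda p: p["scenic"]),           # mildly prefer scenic plans
]

def preference_score(plan_props):
    return sum(w for w, pred in preferences if pred(plan_props))

best = max(candidate_plans, key=lambda name: preference_score(candidate_plans[name]))
print(best, preference_score(candidate_plans[best]))
# -> drive_highway 2.0   (quickness outweighs cost and scenery here)
```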
In computer science, an action language is a language for specifying state transition systems, and is commonly used to create formal models of the effects of actions on the world. Action languages are commonly used in the artificial intelligence and robotics domains, where they describe how actions affect the states of systems over time, and may be used for automated planning.
Action model learning is an area of machine learning concerned with the creation and modification of a software agent's knowledge about the effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in a logic-based action description language and used as the input for automated planners.
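As a toy illustration of the idea (not any specific algorithm from the literature), an action's preconditions and effects can be estimated from observed state transitions: facts present in every pre-state are candidate preconditions, and facts consistently added or deleted are candidate effects.

```python
# Toy action-model learning: estimate preconditions and effects of one action
# from observed (state_before, state_after) pairs. States are sets of facts.

observations = [
    ({"door_closed", "at_door", "have_key"},
     {"door_open", "at_door", "have_key"}),
    ({"door_closed", "at_door", "have_key", "lights_on"},
     {"door_open", "at_door", "have_key", "lights_on"}),
]

def learn_action_model(observations):
    preconditions = set.intersection(*(before for before, _ in observations))
    add_effects = set.intersection(*(after - before for before, after in observations))
    del_effects = set.intersection(*(before - after for before, after in observations))
    return preconditions, add_effects, del_effects

pre, add, delete = learn_action_model(observations)
print("preconditions:", sorted(pre))   # ['at_door', 'door_closed', 'have_key']
print("adds:", sorted(add))            # ['door_open']
print("deletes:", sorted(delete))      # ['door_closed']
```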
GOLOG is a high-level logic programming language for the specification and execution of complex actions in dynamic domains. It is based on the situation calculus. It is a first-order logical language for reasoning about action and change. GOLOG was developed at the University of Toronto.