The Planning Domain Definition Language (PDDL) is an attempt to standardize Artificial Intelligence (AI) planning languages. [1] It was first developed by Drew McDermott and his colleagues in 1998 mainly to make the 1998/2000 International Planning Competition (IPC) possible, and then evolved with each competition. The standardization provided by PDDL has the benefit of making research more reusable and easily comparable, though at the cost of some expressive power, compared to domain-specific systems. [2]
PDDL is a human-readable format for problems in automated planning that gives a description of the possible states of the world, a description of the set of possible actions, a specific initial state of the world, and a specific set of desired goals. Action descriptions include the prerequisites of the action and the effects of the action. PDDL separates the model of the planning problem into two major parts: (1) a domain description of those elements that are present in every problem of the problem domain, and (2) the problem description which determines the specific planning problem. The problem description includes the initial state and the goals to be accomplished. The example below gives a domain definition and a problem description instance for the automated planning of a robot with two gripper arms.
PDDL becomes the input to planner software, which is usually a domain-independent Artificial Intelligence (AI) planner. PDDL does not describe the output of the planner software, but the output is usually a totally or partially ordered plan, which is a sequence of actions, some of which may be executed in parallel.
The PDDL language was inspired by the Stanford Research Institute Problem Solver (STRIPS) and the Action Description Language (ADL), among others. PDDL also draws on principles from knowledge representation languages used to author ontologies, such as the Web Ontology Language (OWL). Ontologies are a formal way to describe taxonomies and classification networks, essentially defining the structure of knowledge for various domains: the nouns representing classes of objects and the verbs representing relations between the objects.
The latest version of PDDL is described in a BNF (Backus–Naur Form) syntax definition of PDDL 3.1. [3] Several online resources of how to use PDDL are available, [1] [4] [5] [6] [7] and also a book. [8]
PDDL 1.2 was the official language of the 1st and 2nd IPC in 1998 and 2000 respectively. [9] It separated the model of the planning problem into two major parts: (1) the domain description and (2) the related problem description. Such a division of the model allows for an intuitive separation of those elements which (1) are present in every specific problem of the problem domain (these elements are contained in the domain description) from those elements which (2) determine the specific planning problem (these elements are contained in the problem description). Thus several problem descriptions may be connected to the same domain description (just as several instances may exist of a class in OOP (Object-Oriented Programming) or in OWL (Web Ontology Language), for example). A domain and a connected problem description together form the PDDL model of a planning problem, and eventually this is the input of a planner (usually a domain-independent AI planner), which aims to solve the given planning problem via some appropriate planning algorithm. The output of the planner is not specified by PDDL, but it is usually a totally or partially ordered plan (a sequence of actions, some of which may even be executed in parallel). The contents of a PDDL 1.2 domain and problem description, in general, are as follows.
(1) The domain description consisted of a domain-name definition, the definition of requirements (to declare to the planner which model elements the PDDL model actually uses), the definition of the object-type hierarchy (just like a class hierarchy in OOP), the definition of constant objects (which are present in every problem of the domain), the definition of predicates (templates for logical facts), and the definition of the possible actions (operator schemas with parameters, which are grounded/instantiated during execution). Actions had parameters (variables that may be instantiated with objects), preconditions and effects. The effects of actions could also be conditional (when-effects).
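For illustration, a minimal PDDL 1.2 style domain with typing, a constant object and a conditional (when) effect might look as follows (a hypothetical briefcase-like example written for this article, not taken from any competition domain; the listed requirement flags are indicative):

(define (domain briefcase-world)
  (:requirements :strips :typing :equality :conditional-effects)
  (:types location physob)                ; object-type hierarchy
  (:constants B - physob)                 ; the briefcase is present in every problem
  (:predicates (at ?x - physob ?l - location)
               (in ?x - physob))
  (:action move-briefcase
    :parameters (?from ?to - location)
    :precondition (and (at B ?from) (not (= ?from ?to)))
    :effect (and (at B ?to) (not (at B ?from))
                 ; conditional effect: whatever is inside the briefcase moves with it
                 (forall (?z - physob)
                   (when (in ?z)
                         (and (at ?z ?to) (not (at ?z ?from))))))))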
(2) The problem description consisted of a problem-name definition, a reference to the related domain name, the definition of all possible objects (atoms in the logical universe), the initial conditions (the initial state of the planning environment, a conjunction of true/false facts), and the definition of goal states (a logical expression over facts that should be true/false in a goal state of the planning environment). In effect, PDDL 1.2 captured the "physics" of a deterministic, single-agent, discrete, fully accessible planning environment.
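A matching problem description for the hypothetical briefcase domain sketched above could then be, for example:

(define (problem take-pencil-to-office)
  (:domain briefcase-world)
  (:objects home office - location
            pencil book - physob)
  (:init (at B home) (at pencil home) (in pencil) (at book home))
  (:goal (and (at B office) (at pencil office) (at book home))))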
PDDL 2.1 was the official language of the 3rd IPC in 2002. [10] It introduced numeric fluents (e.g. to model non-binary resources such as fuel level, time, energy, distance or weight), plan metrics (to allow quantitative evaluation of plans, and thus not just goal-driven but also utility-driven planning, i.e. optimization, metric minimization/maximization), and durative/continuous actions (which could have variable, non-discrete length, conditions and effects). In effect, PDDL 2.1 allowed the representation and solution of many more real-world problems than the original version of the language.
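As a rough sketch of these features (a hypothetical transport-style fragment invented for this article, not taken from any IPC domain), numeric fluents, a durative action and a plan metric might be written like this:

; in the domain file (assuming the :typing, :fluents and :durative-actions requirements)
(:functions (fuel-level ?t - truck)
            (distance ?from ?to - place))

(:durative-action drive
  :parameters (?t - truck ?from ?to - place)
  :duration (= ?duration (distance ?from ?to))           ; variable, non-discrete duration
  :condition (and (at start (at ?t ?from))
                  (at start (>= (fuel-level ?t) (distance ?from ?to))))
  :effect (and (at start (not (at ?t ?from)))
               (at end (at ?t ?to))
               (at end (decrease (fuel-level ?t) (distance ?from ?to)))))

; in the problem file: a plan metric turning planning into optimization
(:metric minimize (total-time))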
PDDL 2.2 was the official language of the deterministic track of the 4th IPC in 2004. [11] It introduced derived predicates (to model the dependency of given facts on other facts, e.g. if A is reachable from B, and B is reachable from C, then A is reachable from C (transitivity)), and timed initial literals (to model exogenous events occurring at a given time independently of plan execution). In effect, PDDL 2.2 extended the language with a few important elements, but was not as radical an evolution as PDDL 2.1 had been after PDDL 1.2.
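These two additions might look as follows in a hypothetical road-network fragment (written for illustration only):

; derived predicate in the domain file: reachability as the transitive closure of road
(:derived (reachable ?a ?b - place)
  (or (road ?a ?b)
      (exists (?c - place) (and (road ?a ?c) (reachable ?c ?b)))))

; timed initial literals in the problem file: exogenous events independent of the plan
(:init (road a b) (road b c)
       (at 9 (shop-open))            ; becomes true at time 9
       (at 20 (not (shop-open))))    ; becomes false again at time 20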
PDDL 3.0 was the official language of the deterministic track of the 5th IPC in 2006. [12] [13] [14] It introduced state-trajectory constraints (hard constraints in the form of modal-logic expressions that must hold for the state trajectory produced during the execution of a plan solving the given planning problem) and preferences (soft constraints in the form of similar logical expressions, whose satisfaction is not required but can be incorporated into the plan metric, e.g. to maximize the number of satisfied preferences or simply to measure the quality of a plan), to enable preference-based planning. In effect, PDDL 3.0 updated the expressiveness of the language so it could cope with recent, important developments in planning.
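A hypothetical illustration (invented for this article) of a hard trajectory constraint, a preference, and a metric that penalizes violating the preference:

; in the problem file
(:constraints
  (and (always (not (damaged pkg1)))                          ; hard constraint: must hold in every state
       (preference pkg1-first
         (sometime-before (delivered pkg2) (delivered pkg1))))) ; soft constraint: pkg1 before pkg2

(:metric minimize (+ (total-time)
                     (* 5 (is-violated pkg1-first))))          ; weight of the violated preference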
PDDL 3.1 was the official language of the deterministic track of the 6th and 7th IPC in 2008 and 2011 respectively. [15] [16] [17] It introduced object fluents (i.e. the range of a function could now be not only numeric (integer or real) but also any object type). Thus PDDL 3.1 adapted the language even more to modern expectations, with a syntactically seemingly small but semantically quite significant change in expressiveness.
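A hypothetical fragment (assuming the :typing and :object-fluents requirements) showing a function whose range is an object type rather than a number:

(:functions (location-of ?t - truck) - place          ; object fluent: its value is a place, not a number
            (fuel-level ?t - truck) - number)         ; ordinary numeric fluent

(:action drive
  :parameters (?t - truck ?to - place)
  :precondition (road (location-of ?t) ?to)           ; function term used inside a predicate
  :effect (assign (location-of ?t) ?to))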
The latest version of the language is PDDL3.1. The BNF (Backus–Naur Form) syntax definition of PDDL3.1 can be found among the resources of the IPC-2011 homepage or the IPC-2014 homepage.
PDDL+, an extension of PDDL 2.1 developed around 2002–2006, provides a more flexible model of continuous change through the use of autonomous processes and events. [2] [18] The key ability this extension provides is to model the interaction between the agent's behaviour and changes initiated by the agent's environment. Processes run over time and have a continuous effect on numeric values. They are initiated and terminated either by the direct action of the agent or by events triggered in the environment. This 3-part structure is referred to as the start-process-stop model. Distinctions are made between logical and numeric states: transitions between logical states are assumed to be instantaneous, whilst occupation of a given logical state can endure over time. Thus in PDDL+ continuous update expressions are restricted to occur only in process effects. Actions and events, which are instantaneous, are restricted to the expression of discrete change. This yields the aforementioned 3-part modelling of periods of continuous change: (1) an action or event starts a period of continuous change on a numeric variable, expressed by means of a process; (2) the process realizes the continuous change of the numeric variable; (3) an action or event finally stops the execution of the process and terminates its effect on the numeric variable. (Note that the goals of the plan might be achieved before an active process is stopped.)
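A minimal sketch of the start-process-stop model in PDDL+ style syntax (a hypothetical kettle example written for illustration; #t denotes the continuously flowing time within a process effect):

(:action turn-on                     ; (1) an instantaneous action starts the period of change
  :parameters (?k - kettle)
  :precondition (not (on ?k))
  :effect (on ?k))

(:process heating                    ; (2) the process changes a numeric variable continuously
  :parameters (?k - kettle)
  :precondition (and (on ?k) (< (temperature ?k) 100))
  :effect (increase (temperature ?k) (* #t (heat-rate ?k))))

(:event boil                         ; (3) an exogenous event stops the process
  :parameters (?k - kettle)
  :precondition (and (on ?k) (>= (temperature ?k) 100))
  :effect (not (on ?k)))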
NDDL (New Domain Definition Language) is NASA's response to PDDL, from around 2002. [19] [20] Its representation differs from PDDL in several respects: (1) it uses a variable/value representation (timelines/activities) rather than a propositional/first-order-logic one, and (2) there is no concept of states or actions, only of intervals (activities) and constraints between those activities. In this respect, models in NDDL look more like schemas for SAT encodings of planning problems than like PDDL models. Because of these differences, planning and execution of plans (e.g. during critical space missions) may be more robust when using NDDL, but the correspondence to standard planning-problem representations other than PDDL may be much less intuitive than in the case of PDDL.
MAPL (Multi-Agent Planning Language, pronounced "maple") is an extension of PDDL 2.1 from around 2003. [21] It is a quite substantial modification of the original language. It introduces non-propositional state variables (which may be n-ary: true, false, unknown, or anything else). It also introduces a temporal model given with modal operators (before, after, etc.). In PDDL 3.0, however, a more thorough temporal model was given, which is also compatible with the original PDDL syntax (and is just an optional addition). MAPL further introduces actions whose duration is determined at runtime, and explicit plan synchronization, which is realized through speech-act-based communication among agents. This assumption may be artificial, since agents executing concurrent plans do not necessarily need to communicate to be able to function in a multi-agent environment. Finally, MAPL introduces events (endogenous and exogenous) for the sake of handling concurrency of actions. Thus events become an explicit part of plans, and are assigned to agents by a control function, which is also part of the plan.
OPT (Ontology with Polymorphic Types) was a profound extension of PDDL 2.1 by Drew McDermott from around 2003–2005 (with some similarities to PDDL+). [22] It was an attempt to create a general-purpose notation for creating ontologies, defined as formalized conceptual frameworks for planning domains about which planning applications are to reason. Its syntax was based on PDDL, but it had a much more elaborate type system, which allowed users to make use of higher-order constructs such as explicit λ-expressions, allowing for efficient type inference. Not only did domain objects have types (level 0 types); the functions/fluents defined over these objects also had types in the form of arbitrary mappings (level 1 types). These mappings could be generic, so their parameters (the domain and range of the generic mapping) could be defined with variables, which could have an even higher-level type (level 2 type). Moreover, the mappings could be arbitrary: the domain or range of a function (e.g. a predicate or numeric fluent) could be any level 0/1/2 type, so that, for example, functions could map from arbitrary functions to arbitrary functions. OPT was basically intended to be (almost) upwardly compatible with PDDL 2.1. The notation for processes and durative actions was borrowed mainly from PDDL+ and PDDL 2.1, but beyond that OPT offered many other significant extensions (e.g. data structures, non-Boolean fluents, return values for actions, links between actions, hierarchical action expansion, hierarchy of domain definitions, and the use of namespaces for compatibility with the Semantic Web).
PPDDL (Probabilistic PDDL) 1.0 was the official language of the probabilistic track of the 4th and 5th IPC in 2004 and 2006 respectively. [23] It extended PDDL 2.1 with probabilistic effects (discrete, general probability distributions over the possible effects of an action), reward fluents (for incrementing or decrementing the total reward of a plan in the effects of actions), goal rewards (for rewarding a state trajectory that incorporates at least one goal state), and goal-achieved fluents (which are true if the state trajectory incorporates at least one goal state). These changes allowed PPDDL 1.0 to capture Markov Decision Process (MDP) planning, where there may be uncertainty in the state transitions, but the environment is fully observable by the planner/agent.
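A hypothetical PPDDL-style fragment (written for illustration) showing a probabilistic effect and the reward fluent; with the remaining probability mass the action simply has no effect:

(:action move
  :parameters (?from ?to - location)
  :precondition (at-robot ?from)
  :effect (and (decrease (reward) 1)                       ; each attempt costs 1 reward
               (probabilistic
                 0.9 (and (at-robot ?to) (not (at-robot ?from))))))
                 ; with the remaining probability 0.1 the robot stays put

; in the problem file
(:goal (at-robot target))
(:goal-reward 100)                                         ; bonus for reaching a goal state
(:metric maximize (reward))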
APPL (Abstract Plan Preparation Language) is a newer variant of NDDL from 2006 that is more abstract than most existing planning languages, such as PDDL or NDDL. [24] The goal of this language was to simplify the formal analysis and specification of planning problems intended for safety-critical applications such as power management or automated rendezvous in future manned spacecraft. APPL used the same concepts as NDDL, extended with actions and some other concepts, but its expressive power is still much less than PDDL's (in the hope of remaining robust and formally verifiable).
RDDL (Relational Dynamic influence Diagram Language) was the official language of the uncertainty track of the 7th IPC in 2011. [25] Conceptually it is based on PPDDL1.0 and PDDL3.0, but practically it is a completely different language both syntactically and semantically. The introduction of partial observability is one of the most important changes in RDDL compared to PPDDL1.0. It allows efficient description of Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs) by representing everything (state-fluents, observations, actions, ...) with variables. This way RDDL departs from PDDL significantly. Grounded RDDL corresponds to Dynamic Bayesian Networks (DBNs) similarly to PPDDL1.0, but RDDL is more expressive than PPDDL1.0.
MA-PDDL (Multi-Agent PDDL) is a minimalistic, modular extension of PDDL 3.1 introduced in 2012 (i.e. a new :multi-agent requirement) that allows planning by and for multiple agents. [26] The addition is compatible with all the features of PDDL 3.1 and addresses most of the issues of MAPL. It adds the possibility to distinguish between the possibly different actions of different agents (i.e. different capabilities). Similarly, different agents may have different goals and/or metrics. The preconditions of actions may now directly refer to concurrent actions (e.g. the actions of other agents), and thus actions with interacting effects can be represented in a general, flexible way (e.g. suppose that at least 2 agents are needed to execute a lift action to lift a heavy table into the air, or otherwise the table would remain on the ground; this is an example of constructive synergy, but destructive synergy can also easily be represented in MA-PDDL). Moreover, as a kind of syntactic sugar, a simple mechanism for the inheritance and polymorphism of actions, goals and metrics was also introduced in MA-PDDL (assuming :typing is declared). Since PDDL 3.1 assumes that the environment is deterministic and fully observable, the same holds for MA-PDDL, i.e. every agent can access the value of every state fluent at every time instant and observe every previously executed action of each agent, and the concurrent actions of agents unambiguously determine the next state of the environment. This was later improved by the addition of partial observability and probabilistic effects (again, in the form of two new modular requirements, :partial-observability and :probabilistic-effects, respectively, the latter being inspired by PPDDL 1.0, and both being compatible with all previous features of the language, including :multi-agent). [27]
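The table-lifting example might be sketched roughly as follows; this is only an illustrative approximation of the 2012 proposal, and the exact keywords (in particular the :agent field and the reference to the concurrent lift action inside the precondition) are assumptions here, not verified against the original syntax:

(:action lift
  :agent ?a - agent                                   ; assumed MA-PDDL field naming the acting agent
  :parameters (?t - table)
  :precondition (and (at ?a ?t)
                     ; constructive synergy: some other agent must lift the table concurrently
                     (exists (?b - agent)
                       (and (not (= ?a ?b)) (lift ?b ?t))))
  :effect (lifted ?t))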
This is the domain definition of a STRIPS instance for the automated planning of a robot with two gripper arms. [28]
(define (domain gripper-strips)
  (:predicates (room ?r) (ball ?b) (gripper ?g)
               (at-robby ?r) (at ?b ?r)
               (free ?g) (carry ?o ?g))
  (:action move
    :parameters (?from ?to)
    :precondition (and (room ?from) (room ?to) (at-robby ?from))
    :effect (and (at-robby ?to) (not (at-robby ?from))))
  (:action pick
    :parameters (?obj ?room ?gripper)
    :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                       (at ?obj ?room) (at-robby ?room) (free ?gripper))
    :effect (and (carry ?obj ?gripper) (not (at ?obj ?room)) (not (free ?gripper))))
  (:action drop
    :parameters (?obj ?room ?gripper)
    :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                       (carry ?obj ?gripper) (at-robby ?room))
    :effect (and (at ?obj ?room) (free ?gripper) (not (carry ?obj ?gripper)))))
And this is the problem definition that instantiates the previous domain definition with a concrete environment with two rooms and two balls.
(define (problem strips-gripper2)
  (:domain gripper-strips)
  (:objects rooma roomb ball1 ball2 left right)
  (:init (room rooma) (room roomb)
         (ball ball1) (ball ball2)
         (gripper left) (gripper right)
         (at-robby rooma)
         (free left) (free right)
         (at ball1 rooma) (at ball2 rooma))
  (:goal (at ball1 roomb)))
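A planner given this domain and problem might return, for example, the following totally ordered plan (one of several possible solutions):

(pick ball1 rooma left)
(move rooma roomb)
(drop ball1 roomb left)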
In artificial intelligence, with implications for cognitive science, the frame problem describes an issue with using first-order logic to express facts about a robot in the world. Representing the state of a robot with traditional first-order logic requires the use of many axioms that simply imply that things in the environment do not change arbitrarily. For example, Hayes describes a "block world" with rules about stacking blocks together. In a first-order logic system, additional axioms are required to make inferences about the environment. The frame problem is the problem of finding adequate collections of axioms for a viable description of a robot environment.
Knowledge representation and reasoning is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning.
Logic programming is a programming, database and knowledge representation paradigm based on formal logic. A logic program is a set of sentences in logical form, representing knowledge about some problem domain. Computation is performed by applying logical reasoning to that knowledge, to solve problems in the domain. Major logic programming language families include Prolog, Answer Set Programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses A :- B1, ..., Bn, read declaratively as "A if B1 and ... and Bn".
In computer science, declarative programming is a programming paradigm—a style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow.
A modeling language is any artificial language that can be used to express data, information, knowledge or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure.
IDEF, initially an abbreviation of ICAM Definition and renamed in 1999 as Integration Definition, is a family of modeling languages in the field of systems and software engineering. They cover a wide range of uses, from functional modeling to data, simulation, object-oriented analysis and design, and knowledge acquisition. These definition languages were developed under funding from the U.S. Air Force and, although still most commonly used by it and other military and United States Department of Defense (DoD) agencies, are in the public domain.
A blackboard system is an artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts.
Automated planning and scheduling, sometimes denoted as simply AI planning, is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory.
The Stanford Research Institute Problem Solver, known by its acronym STRIPS, is an automated planner developed by Richard Fikes and Nils Nilsson in 1971 at SRI International. The same name was later used to refer to the formal language of the inputs to this planner. This language is the base for most of the languages for expressing automated planning problem instances in use today; such languages are commonly known as action languages. This article only describes the language, not the planner.
The situation calculus is a logic formalism designed for representing and reasoning about dynamical domains. It was first introduced by John McCarthy in 1963. The main version of the situation calculus presented in this article is based on the one introduced by Ray Reiter in 1991. It is followed by sections about McCarthy's 1986 version and a logic programming formulation.
The event calculus is a logical theory for representing and reasoning about events and about the way in which they change the state of some real or artificial world. It deals both with action events, which are performed by agents, and with external events, which are outside the control of any agent.
The blocks world is a planning domain in artificial intelligence. The domain consists of a set of wooden blocks of various shapes and colors sitting on a table. The goal is to build one or more vertical stacks of blocks. Only one block may be moved at a time: it may either be placed on the table or placed atop another block. Because of this, any block that is, at a given time, under another block cannot be moved. Moreover, some kinds of blocks cannot have other blocks stacked on top of them.
In artificial intelligence, hierarchical task network (HTN) planning is an approach to automated planning in which the dependency among actions can be given in the form of hierarchically structured networks.
In artificial intelligence, action description language (ADL) is a language for specifying actions in automated planning and scheduling systems, in particular for robots. It is considered an advancement of STRIPS. Edwin Pednault proposed this language in 1987. It is an example of an action language.
In philosophy, a process ontology refers to a universal model of the structure of the world as an ordered wholeness. Such ontologies are fundamental ontologies, in contrast to the so-called applied ontologies. Fundamental ontologies do not claim to be accessible to any empirical proof in itself but to be a structural design pattern, out of which empirical phenomena can be explained and put together consistently. Throughout Western history, the dominating fundamental ontology is the so-called substance theory. However, fundamental process ontologies have become more important in recent times, because the progress in the discovery of the foundations of physics has spurred the development of a basic concept able to integrate such boundary notions as "energy," "object", and those of the physical dimensions of space and time.
In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.
In computer science, an action language is a language for specifying state transition systems, and is commonly used to create formal models of the effects of actions on the world. Action languages are commonly used in the artificial intelligence and robotics domains, where they describe how actions affect the states of systems over time, and may be used for automated planning.
Action model learning is an area of machine learning concerned with the creation and modification of a software agent's knowledge about the effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in a logic-based action description language and used as input for automated planners.
GOLOG is a high-level logic programming language for the specification and execution of complex actions in dynamical domains. It is based on the situation calculus. It is a first-order logical language for reasoning about action and change. GOLOG was developed at the University of Toronto.
The International Conference on Automated Planning and Scheduling (ICAPS) is a leading international academic conference in automated planning and scheduling held annually for researchers and practitioners in planning and scheduling. ICAPS is supported by the National Science Foundation, the journal Artificial Intelligence, and other supporters.