Planner (programming language)

Paradigm: Multi-paradigm: logic, procedural
Designed by: Carl Hewitt
First appeared: 1969
Major implementations: Micro-Planner, Pico-Planner, Popler
Dialects: QA4, Conniver, QLISP, Ether
Influenced: Prolog, Smalltalk

Planner (often seen in publications as "PLANNER" although it is not an acronym) is a programming language designed by Carl Hewitt at MIT, and first published in 1969. First, subsets such as Micro-Planner and Pico-Planner were implemented, and then essentially the whole language was implemented as Popler by Julian Davies at the University of Edinburgh in the POP-2 programming language.[1] Derivations such as QA4, Conniver, QLISP and Ether (see scientific community metaphor) were important tools in artificial intelligence research in the 1970s, which influenced commercial developments such as Knowledge Engineering Environment (KEE) and Automated Reasoning Tool (ART).

Procedural approach versus logical approach

The two major paradigms for constructing semantic software systems were procedural and logical. The procedural paradigm was epitomized by Lisp,[2] which featured recursive procedures that operated on list structures.

The logical paradigm was epitomized by resolution-based derivation (proof) finders that used a uniform proof procedure.[3] According to the logical paradigm, it was "cheating" to incorporate procedural knowledge.[4]

Procedural embedding of knowledge

Planner was invented for the purposes of the procedural embedding of knowledge[5] and was a rejection of the resolution uniform proof procedure paradigm,[6] which

  1. Converted everything to clausal form. Converting all information to clausal form is problematic because it hides the underlying structure of the information.
  2. Then used resolution to attempt to obtain a proof by contradiction by adding the clausal form of the negation of the theorem to be proved. Using only resolution as the rule of inference is problematic because it hides the underlying structure of proofs. Also, using proof by contradiction is problematic because the axiomatizations of all practical domains of knowledge are inconsistent in practice.

Planner was a kind of hybrid between the procedural and logical paradigms because it combined programmability with logical reasoning. Planner featured a procedural interpretation of logical sentences: an implication of the form (P implies Q) can be interpreted procedurally in the following ways using pattern-directed invocation (a sketch follows the list):

  1. Forward chaining (antecedently):
    If assert P, assert Q
    If assert not Q, assert not P
  2. Backward chaining (consequently):
    If goal Q, goal P
    If goal not P, goal not Q
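
To make the two procedural readings concrete, here is a minimal sketch in Python. It is an illustration under assumptions, not Planner syntax: Planner matched patterns against assertions and goals, whereas this toy uses exact string matching, omits the contrapositive forms above, and all identifiers are invented for illustration.

database = set()        # ground assertions
forward_rules = []      # (P, Q): when P is asserted, assert Q
backward_rules = []     # (Q, P): to achieve goal Q, pose goal P

def add_implication(p, q):
    forward_rules.append((p, q))     # antecedent (forward) reading
    backward_rules.append((q, p))    # consequent (backward) reading

def assert_fact(fact):
    if fact not in database:
        database.add(fact)
        for p, q in forward_rules:
            if p == fact:            # invocation triggered by an assertion
                assert_fact(q)

def prove(goal):
    if goal in database:
        return True
    return any(q == goal and prove(p)    # invocation triggered by a goal
               for q, p in backward_rules)

add_implication("human(Socrates)", "mortal(Socrates)")
assert_fact("human(Socrates)")       # forward chaining adds mortal(Socrates)
print(prove("mortal(Socrates)"))     # True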

In this respect, the development of Planner was influenced by natural deductive logical systems (especially the one by Frederic Fitch [1952]).

Micro-Planner implementation

A subset called Micro-Planner was implemented by Gerry Sussman, Eugene Charniak and Terry Winograd[7] and was used in Winograd's natural-language understanding program SHRDLU, Eugene Charniak's story understanding work, Thorne McCarty's work on legal reasoning, and some other projects. This generated a great deal of excitement in the field of AI. It also generated controversy because it proposed an alternative to the logic approach that had been one of the mainstay paradigms for AI.

At SRI International, Jeff Rulifson, Jan Derksen, and Richard Waldinger developed QA4, which built on the constructs in Planner and introduced a context mechanism to provide modularity for expressions in the database. Earl Sacerdoti and Rene Reboh developed QLISP, an extension of QA4 embedded in INTERLISP, which provided Planner-like reasoning within a procedural language and its rich programming environment. QLISP was used by Richard Waldinger and Karl Levitt for program verification, by Earl Sacerdoti for planning and execution monitoring, by Jean-Claude Latombe for computer-aided design, by Nachum Dershowitz for program synthesis, by Richard Fikes for deductive retrieval, and by Steven Coles for an early expert system that guided use of an econometric model.

Computers were expensive. They had only a single slow processor and their memories were very small by comparison with today. So Planner adopted some efficiency expedients, including the following:

  • Backtracking[8] was adopted to economize on time and storage by working on and storing only one possibility at a time in exploring alternatives (see the sketch after this list).
  • The unique name assumption, by which different names are assumed to refer to different objects, saved the space and time needed to reason about the equality of names.
  • Negation as failure: if an exhaustive attempt to achieve a goal G fails, then assert (not G).
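
A chronological backtracking search can be sketched as follows (an illustrative toy in Python, not Planner's actual implementation; the function and parameter names are invented): alternatives are explored depth-first, only the current sequence of choices is kept in memory, and the most recent choice is undone on failure.

def backtrack(choices_per_slot, consistent, partial=()):
    # choices_per_slot: the alternatives available at each decision point
    # consistent: a test applied to each partial sequence of choices
    if len(partial) == len(choices_per_slot):
        return partial                       # every slot filled: success
    for choice in choices_per_slot[len(partial)]:
        candidate = partial + (choice,)      # commit to one possibility
        if consistent(candidate):
            result = backtrack(choices_per_slot, consistent, candidate)
            if result is not None:
                return result                # first solution found wins
        # otherwise fall through: undo this choice and try the next one
    return None                              # all alternatives failed

# Example: pick one digit per slot so that the digits strictly increase.
print(backtrack([[3, 1], [1, 2], [9, 3]],
                lambda p: all(a < b for a, b in zip(p, p[1:]))))  # (1, 2, 9)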

The genesis of Prolog

Gerry Sussman, Eugene Charniak, Seymour Papert and Terry Winograd visited the University of Edinburgh in 1971, spreading the news about Micro-Planner and SHRDLU and casting doubt on the resolution uniform proof procedure approach that had been the mainstay of the Edinburgh Logicists. At the University of Edinburgh, Bruce Anderson implemented a subset of Micro-Planner called PICO-PLANNER,[9] and Julian Davies (1973) implemented essentially all of Planner.

According to Donald MacKenzie, Pat Hayes recalled the impact of a visit from Papert to Edinburgh, which had become the "heart of artificial intelligence's Logicland," according to Papert's MIT colleague, Carl Hewitt. Papert eloquently voiced his critique of the resolution approach dominant at Edinburgh "…and at least one person upped sticks and left because of Papert."[10]

The above developments generated tension among the Logicists at Edinburgh. These tensions were exacerbated when the UK Science Research Council commissioned Sir James Lighthill to write a report on the AI research situation in the UK. The resulting report [Lighthill 1973; McCarthy 1973] was highly critical, although SHRDLU was favorably mentioned.

Pat Hayes visited Stanford where he learned about Planner. When he returned to Edinburgh, he tried to influence his friend Bob Kowalski to take Planner into account in their joint work on automated theorem proving. "Resolution theorem-proving was demoted from a hot topic to a relic of the misguided past. Bob Kowalski doggedly stuck to his faith in the potential of resolution theorem proving. He carefully studied Planner."[11] Kowalski [1988] states, "I can recall trying to convince Hewitt that Planner was similar to SL-resolution." But Planner was invented for the purposes of the procedural embedding of knowledge and was a rejection of the resolution uniform proof procedure paradigm. Colmerauer and Roussel recalled their reaction to learning about Planner in the following way:

"While attending an IJCAI convention in September ‘71 with Jean Trudel, we met Robert Kowalski again and heard a lecture by Terry Winograd on natural language processing. The fact that he did not use a unified formalism left us puzzled. It was at this time that we learned of the existence of Carl Hewitt’s programming language, Planner. The lack of formalization of this language, our ignorance of Lisp and, above all, the fact that we were absolutely devoted to logic meant that this work had little influence on our later research." [12]

In the fall of 1972, Philippe Roussel implemented a language called Prolog (an abbreviation for PROgrammation en LOGique, French for "programming in logic"). Prolog programs are generically of the following form (which is a special case of backward chaining in Planner):

When goal Q, goal P1 and ... and goal Pn
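
The operational reading of such clauses can be sketched as a tiny propositional backward chainer in Python (an assumed illustration, not a real Prolog engine: actual Prolog adds variables and unification to the clause selection below, and the rule names here are invented).

rules = {
    "mortal": [["human"]],    # mortal :- human.
    "human":  [[]],           # human.  (a fact is a clause with an empty body)
}

def prove(goal):
    for body in rules.get(goal, []):      # try each clause for the goal in order
        if all(prove(p) for p in body):   # prove goal P1 and ... and goal Pn
            return True                   # first success wins
    return False                          # all clauses failed: backtrack

print(prove("mortal"))    # True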

Prolog duplicated the following aspects of Micro-Planner:

  • Pattern-directed invocation of procedures from goals (i.e., backward chaining)
  • An indexed database of pattern-directed procedures and ground sentences

Prolog also duplicated the following capabilities of Micro-Planner, which were pragmatically useful for the computers of the era because they saved space and time:

  • The Unique Name Assumption, by which different names are assumed to refer to different objects
  • Negation as Failure: if an exhaustive attempt to prove a goal G fails, then conclude (not G)

Use of the Unique Name Assumption and Negation as Failure became more questionable when attention turned to Open Systems.[13]

The following capabilities of Micro-Planner were omitted from Prolog:

  • Pattern-directed invocation of procedural plans from assertions (i.e., forward chaining)
  • Logical negation, e.g., (not (human Socrates))

Prolog did not include logical negation in part because it raises implementation issues. Consider, for example, what would happen if negation were included in the following Prolog program:

not Q.
Q :- P.

The above program would be unable to prove not P, even though not P follows by the rules of mathematical logic. This is an illustration of the fact that Prolog (like Planner) is intended to be a programming language, and so does not (by itself) prove many of the logical consequences that follow from a declarative reading of its programs.
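
Negation as failure, by contrast, is straightforward to implement. A sketch extending the toy chainer above (again an assumed illustration, not a real Prolog engine): "not G" succeeds exactly when an exhaustive attempt to prove G fails.

rules = {"q": [["p"]]}    # q :- p.  (and no clause for p)

def prove(goal):
    if isinstance(goal, tuple) and goal[0] == "not":
        return not prove(goal[1])        # negation as failure
    for body in rules.get(goal, []):
        if all(prove(g) for g in body):
            return True
    return False

print(prove(("not", "p")))    # True, because p is unprovable
print(prove(("not", "q")))    # True by failure, not by modus tollens from an
                              # asserted "not q", which is inexpressible here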

The work on Prolog was valuable in that it was much simpler than Planner. However, as the need arose for greater expressive power in the language, Prolog began to include many of the capabilities of Planner that were left out of the original version of Prolog.


References

  1. Carl Hewitt. Middle History of Logic Programming: Resolution, Planner, Prolog and the Japanese Fifth Generation Project. ArXiv, 2009. arXiv:0904.3036
  2. McCarthy et al. 1962
  3. Robinson 1965
  4. Green 1969
  5. Hewitt 1971
  6. Robinson 1965
  7. Sussman, Charniak, and Winograd 1971
  8. Golomb and Baumert 1965
  9. Anderson 1972
  10. MacKenzie 2001, p. 82.
  11. Bruynooghe, Pereira, Siekmann, and van Emden 2004
  12. Colmerauer and Roussel 1996
  13. Hewitt and de Jong 1983, Hewitt 1985, Hewitt and Inman 1991

Bibliography

  • Bruce Anderson. Documentation for LIB PICO-PLANNER School of Artificial Intelligence, Edinburgh University. 1972
  • Bruce Baumgart. Micro-Planner Alternate Reference Manual Stanford AI Lab Operating Note No. 67, April 1972.
  • Coles, Steven (1975), "The Application of Artificial Intelligence to Heuristic Modeling", 2nd US-Japan Computer Conference.
  • Fikes, Richard (1975), Deductive Retrieval Mechanisms for State Description Models, IJCAI.
  • Fitch, Frederic (1952), Symbolic Logic: an Introduction, New York: Ronald Press.
  • Green, Cordell (1969), "Application of Theorem Proving to Problem Solving", IJCAI.
  • Hewitt, Carl (1969). "PLANNER: A Language for Proving Theorems in Robots". IJCAI. CiteSeerX 10.1.1.80.756.
  • Hewitt, Carl (1971), "Procedural Embedding of Knowledge In Planner", IJCAI.
  • Carl Hewitt. "The Challenge of Open Systems" Byte Magazine. April 1985
  • Carl Hewitt and Jeff Inman. "DAI Betwixt and Between: From ‘Intelligent Agents’ to Open Systems Science" IEEE Transactions on Systems, Man, and Cybernetics. Nov/Dec 1991.
  • Carl Hewitt and Gul Agha. "Guarded Horn clause languages: are they deductive and Logical?" International Conference on Fifth Generation Computer Systems, Ohmsha 1988. Tokyo. Also in Artificial Intelligence at MIT, Vol. 2. MIT Press 1991.
  • Hewitt, Carl (March 2006), The repeated demise of logic programming and why it will be reincarnated – What Went Wrong and Why: Lessons from AI Research and Applications (PDF), Technical Report, AAAI Press, archived from the original (PDF) on 2017-12-10.
  • William Kornfeld and Carl Hewitt. The Scientific Community Metaphor MIT AI Memo 641. January 1981.
  • Bill Kornfeld and Carl Hewitt. "The Scientific Community Metaphor" IEEE Transactions on Systems, Man, and Cybernetics. January 1981.
  • Bill Kornfeld. "The Use of Parallelism to Implement a Heuristic Search" IJCAI 1981.
  • Bill Kornfeld. "Parallelism in Problem Solving" MIT EECS Doctoral Dissertation. August 1981.
  • Bill Kornfeld. "Combinatorially Implosive Algorithms" CACM. 1982
  • Robert Kowalski. "The Limitations of Logic" Proceedings of the 1986 ACM fourteenth annual conference on Computer science.
  • Robert Kowalski. "The Early Years of Logic Programming" CACM January 1988.
  • Latombe, Jean-Claude (1976), "Artificial Intelligence in Computer-Aided Design", CAD Systems, North-Holland.
  • McCarthy, John; Abrahams, Paul; Edwards, Daniel; Hart, Timothy; Levin, Michael (1962), Lisp 1.5 Programmer's Manual, MIT Computation Center and Research Laboratory of Electronics.
  • Robinson, John Alan (1965), "A Machine-Oriented Logic Based on the Resolution Principle", Communications of the ACM, doi:10.1145/321250.321253.
  • Gerry Sussman and Terry Winograd. Micro-Planner Reference Manual AI Memo No. 203, MIT Project MAC, July 1970.
  • Terry Winograd. Procedures as a Representation for Data in a Computer Program for Understanding Natural Language MIT AI TR-235. January 1971.
  • Gerry Sussman, Terry Winograd and Eugene Charniak. Micro-Planner Reference Manual (Update) AI Memo 203A, MIT AI Lab, December 1971.
  • Carl Hewitt. Description and Theoretical Analysis (Using Schemata) of Planner, A Language for Proving Theorems and Manipulating Models in a Robot AI Memo No. 251, MIT Project MAC, April 1972.
  • Eugene Charniak. Toward a Model of Children's Story Comprehension MIT AI TR-266. December 1972.
  • Julian Davies. Popler 1.6 Reference Manual University of Edinburgh, TPU Report No. 1, May 1973.
  • Jeff Rulifson, Jan Derksen, and Richard Waldinger. "QA4, A Procedural Calculus for Intuitive Reasoning" SRI AI Center Technical Note 73, November 1973.
  • Scott Fahlman. "A Planning System for Robot Construction Tasks" MIT AI TR-283. June 1973
  • James Lighthill. "Artificial Intelligence: A General Survey" in Artificial Intelligence: a paper symposium. UK Science Research Council. 1973.
  • John McCarthy. "Review of 'Artificial Intelligence: A General Survey'" in Artificial Intelligence: a paper symposium. UK Science Research Council. 1973.
  • Robert Kowalski "Predicate Logic as Programming Language" Memo 70, Department of Artificial Intelligence, Edinburgh University. 1973
  • Pat Hayes. Computation and Deduction Mathematical Foundations of Computer Science: Proceedings of Symposium and Summer School, Štrbské Pleso, High Tatras, Czechoslovakia, September 3–8, 1973.
  • Carl Hewitt, Peter Bishop and Richard Steiger. "A Universal Modular Actor Formalism for Artificial Intelligence" IJCAI 1973.
  • L. Thorne McCarty. "Reflections on TAXMAN: An Experiment on Artificial Intelligence and Legal Reasoning" Harvard Law Review. Vol. 90, No. 5, March 1977
  • Drew McDermott and Gerry Sussman. The Conniver Reference Manual MIT AI Memo 259A. January 1974.
  • Earl Sacerdoti, et al., "QLISP A Language for the Interactive Development of Complex Systems" AFIPS. 1976
  • Sacerdoti, Earl (1977), A Structure for Plans and Behavior, Elsevier North-Holland.
  • Waldinger, Richard; Levitt, Karl (1974), "Reasoning About Programs", Artificial Intelligence.