Logic Theorist

Logic Theorist is a computer program written in 1956 by Allen Newell, Herbert A. Simon, and Cliff Shaw. [1] It was the first program deliberately engineered to perform automated reasoning, and has been described as "the first artificial intelligence program". [1] [note 1] Logic Theorist proved 38 of the first 52 theorems in chapter two of Alfred North Whitehead and Bertrand Russell's Principia Mathematica, and found new and shorter proofs for some of them. [3]

History

In 1955, when Newell and Simon began to work on the Logic Theorist, the field of artificial intelligence did not yet exist. Even the term itself ("artificial intelligence") would not be coined until the following summer. [note 2]

Simon was a political scientist who had already produced classic work in the study of how bureaucracies function as well as developing his theory of bounded rationality (for which he would later win a Nobel Prize). The study of business organizations requires, like artificial intelligence, an insight into the nature of human problem solving and decision making. Simon remembers consulting at RAND Corporation in the early 1950s and seeing a printer typing out a map, using ordinary letters and punctuation as symbols. He realized that a machine that could manipulate symbols could just as well simulate decision making and possibly even the process of human thought. [5] [6]

The program that printed the map had been written by Newell, a RAND scientist studying logistics and organization theory. For Newell, the decisive moment was in 1954 when Oliver Selfridge came to RAND to describe his work on pattern matching. Watching the presentation, Newell suddenly understood how the interaction of simple, programmable units could accomplish complex behavior, including the intelligent behavior of human beings. "It all happened in one afternoon," he would later say. [2] [7] It was a rare moment of scientific epiphany.

"I had such a sense of clarity that this was a new path, and one I was going to go down. I haven't had that sensation very many times. I'm pretty skeptical, and so I don't normally go off on a toot, but I did on that one. Completely absorbed in it—without existing with the two or three levels consciousness so that you're working, and aware that you're working, and aware of the consequences and implications, the normal mode of thought. No. Completely absorbed for ten to twelve hours." [8]

Newell and Simon began to talk about the possibility of teaching machines to think. Their first project was a program that could prove mathematical theorems like the ones used in Bertrand Russell and Alfred North Whitehead's Principia Mathematica. They enlisted the help of computer programmer Cliff Shaw, also from RAND, to develop the program. (Newell says "Cliff was the genuine computer scientist of the three". [9])

The first version was hand-simulated: they wrote the program onto 3x5 cards and, as Simon recalled:

In January 1956, we assembled my wife and three children together with some graduate students. To each member of the group, we gave one of the cards, so that each one became, in effect, a component of the computer program ... Here was nature imitating art imitating nature. [10]

They succeeded in showing that the program could successfully prove theorems as well as a talented mathematician. Eventually Shaw was able to run the program on the computer at RAND's Santa Monica facility.

In the summer of 1956, John McCarthy, Marvin Minsky, Claude Shannon and Nathaniel Rochester organized a conference on the subject of what they called "artificial intelligence" (a term coined by McCarthy for the occasion). Newell and Simon proudly presented the group with the Logic Theorist. It was met with a lukewarm reception. Pamela McCorduck writes that "the evidence is that nobody save Newell and Simon themselves sensed the long-range significance of what they were doing". [11] Simon confides that "we were probably fairly arrogant about it all" [12] and adds:

They didn't want to hear from us, and we sure didn't want to hear from them: we had something to show them! ... In a way it was ironic because we already had done the first example of what they were after; and second, they didn't pay much attention to it. [13]

Logic Theorist soon proved 38 of the first 52 theorems in chapter 2 of the Principia Mathematica. The proof of theorem 2.85 was actually more elegant than the proof produced laboriously by hand by Russell and Whitehead. Simon was able to show the new proof to Russell himself who "responded with delight". [3] They attempted to publish the new proof in The Journal of Symbolic Logic, but it was rejected on the grounds that a new proof of an elementary mathematical theorem was not notable, apparently overlooking the fact that one of the authors was a computer program. [14] [3]

Newell and Simon formed a lasting partnership, founding one of the first AI laboratories at the Carnegie Institute of Technology and developing a series of influential artificial intelligence programs and ideas, including the General Problem Solver, Soar, and their unified theory of cognition.

Architecture

This is a brief presentation of the program's architecture, based on Gugerty (2006). [15]

Logic Theorist is a program that performs logical processes on logical expressions.

Expressions

For example, a logical expression such as p ∨ (q → r) is represented as a tree with a root element representing the main connective, here ∨. Among the attributes of the root element are pointers to the two elements representing the subexpressions, here p and q → r.
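
A minimal sketch of such an expression tree, in Python rather than the original IPL (the connectives and class names here are illustrative assumptions, not the program's actual data structures):

    # Hypothetical sketch: one node of an expression tree holds a
    # connective (or a variable name at a leaf) plus pointers to the
    # nodes representing its subexpressions.
    class Expr:
        def __init__(self, op, *children):
            self.op = op                 # e.g. "or", "implies", or "p"
            self.children = children     # pointers to subexpressions

        def __repr__(self):
            if not self.children:        # a leaf: a propositional variable
                return self.op
            symbol = {"or": "v", "implies": "->"}.get(self.op, self.op)
            left, right = self.children
            return f"({left!r} {symbol} {right!r})"

    # p v (q -> r): the root represents the connective "or", and its
    # attributes point to the subexpressions p and (q -> r).
    expr = Expr("or", Expr("p"), Expr("implies", Expr("q"), Expr("r")))
    print(expr)                          # prints: (p v (q -> r))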

Processes

There are four kinds of processes, from the lowest to the highest level.
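
The processes are not enumerated here, but Newell, Shaw, and Simon's published accounts of the program describe proof methods such as substitution, detachment, and chaining. As a minimal sketch, assuming the hypothetical Expr class above, detachment (modus ponens run backward) reduces the problem of proving B to proving A whenever a theorem of the form A → B is already available:

    # Hedged sketch of detachment, one of Logic Theorist's documented
    # proof methods. (Hypothetical Python; the real program matched
    # expressions with elaborate similarity tests, not string comparison.)
    def detach(goal, proved_theorems):
        """Return subgoals: for each proved theorem A -> B whose B
        matches the goal, proving A would prove the goal."""
        subgoals = []
        for theorem in proved_theorems:
            if theorem.op == "implies" and repr(theorem.children[1]) == repr(goal):
                subgoals.append(theorem.children[0])   # must now prove A
        return subgoals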

Logic Theorist's influence on AI

Logic Theorist introduced several concepts that would be central to AI research:

Reasoning as search
Logic Theorist explored a search tree: the root was the initial hypothesis, and each branch was a deduction based on the rules of logic. Somewhere in the tree was the goal: the proposition the program intended to prove. The pathway along the branches that led to the goal was a proof, a series of statements, each deduced using the rules of logic, that led from the hypothesis to the proposition to be proved. (A minimal search sketch appears after this list.)
Heuristics
Newell and Simon realized that the search tree would grow exponentially and that they needed to "trim" some branches, using "rules of thumb" to determine which pathways were unlikely to lead to a solution. They called these ad hoc rules "heuristics", using a term introduced by George Pólya in his classic book on mathematical proof, How to Solve It. (Newell had taken courses from Pólya at Stanford.) [16] Heuristics would become an important area of research in artificial intelligence and remain an important method for overcoming the intractable combinatorial explosion of exponentially growing searches; the sketch after this list prunes the search with a simple heuristic of this kind.
List processing
To implement Logic Theorist on a computer, the three researchers developed a programming language, IPL, which used the same form of symbolic list processing that would later form the basis of McCarthy's Lisp programming language, an important language still used by AI researchers. [17] [18] (A small illustration of this list style follows below.)
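
A minimal sketch combining the first two ideas, reasoning as search and heuristic pruning, building on the hypothetical Expr and detach sketches from the Architecture section. The heuristic here (prefer syntactically smaller subgoals) is an assumption for illustration only, not Logic Theorist's actual similarity tests:

    import heapq

    # Hypothetical rule of thumb: smaller goals are more promising.
    def heuristic(goal):
        return len(repr(goal))

    def prove(goal, proved_theorems, max_steps=1000):
        """Best-first backward search over the proof tree. Each detachment
        step reduces a goal to a subgoal whose proof implies it, so the
        original goal is proved as soon as any open goal matches an
        already-proved theorem."""
        counter = 0                                # tie-breaker for the heap
        frontier = [(heuristic(goal), counter, goal)]
        seen = set()
        for _ in range(max_steps):
            if not frontier:
                return False                       # search space exhausted
            _, _, g = heapq.heappop(frontier)
            if any(repr(t) == repr(g) for t in proved_theorems):
                return True                        # goal reached
            if repr(g) in seen:
                continue                           # prune duplicate branches
            seen.add(repr(g))
            for sub in detach(g, proved_theorems):
                counter += 1
                heapq.heappush(frontier, (heuristic(sub), counter, sub))
        return False                               # gave up within the budget

    # Example: with axioms p and p -> q, the goal q is proved in two steps.
    axioms = [Expr("p"), Expr("implies", Expr("p"), Expr("q"))]
    print(prove(Expr("q"), axioms))                # True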
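
The list-processing style itself is easy to show: in IPL, as later in Lisp, a symbolic expression is simply a nested list. A hypothetical Python rendering of the tree from the Architecture section:

    # Symbolic list processing in the IPL/Lisp style: an expression is a
    # nested list whose first element names the connective. (Python
    # stand-in; IPL's lists were built from linked cells and also carried
    # attribute lists.)
    expr_as_list = ["or", "p", ["implies", "q", "r"]]    # p v (q -> r)

    def connective(e):
        """Return the main connective of an expression, or the variable
        itself for an atomic expression."""
        return e[0] if isinstance(e, list) else e

    print(connective(expr_as_list))        # or
    print(connective(expr_as_list[2]))     # implies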

Philosophical implications

Pamela McCorduck writes that the Logic Theorist was "proof positive that a machine could perform tasks heretofore considered intelligent, creative and uniquely human". [3] And, as such, it represents a milestone in the development of artificial intelligence and our understanding of intelligence in general.

Simon told a graduate class in January 1956, "Over Christmas, Al Newell and I invented a thinking machine," [19] [20] and would write:

[We] invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind. [21]

This statement, that machines can have minds just as people do, would later be named "strong AI" by philosopher John Searle. It remains a serious subject of debate up to the present day.

Pamela McCorduck also sees in the Logic Theorist the debut of a new theory of the mind, the information processing model (sometimes called computationalism or cognitivism). She writes that "this view would come to be central to their later work, and in their opinion, as central to understanding mind in the twentieth century as Darwin's principle of natural selection had been to understanding biology in the nineteenth century." [22] Newell and Simon would later formalize this proposal as the physical symbol system hypothesis.

Notes

  1. Logic Theorist is usually considered the first true AI program, although Arthur Samuel's checkers program was released earlier. Christopher Strachey also wrote a checkers program in 1951. [2]
  2. The term "artificial intelligence" was coined by John McCarthy in the proposal for the 1956 Dartmouth Conference. The conference is "generally recognized as the official birthdate of the new science", according to Daniel Crevier. [4]

Citations

  1. McCorduck 2004, pp. 123–125; Crevier 1993, pp. 44–46; Russell & Norvig 2021, p. 17.
  2. Crevier 1993, p. 44.
  3. McCorduck 2004, p. 167.
  4. Crevier 1993, pp. 49–50.
  5. Crevier 1993, pp. 41–44.
  6. McCorduck 2004, p. 148.
  7. McCorduck 2004, pp. 157–158.
  8. McCorduck 2004, pp. 158–159.
  9. McCorduck 2004, p. 169.
  10. Crevier 1993, p. 45.
  11. McCorduck 2004, p. 124.
  12. Crevier 1993, p. 48.
  13. Crevier 1993, p. 49.
  14. Crevier 1993, p. 146.
  15. Gugerty, Leo (October 2006). "Newell and Simon's Logic Theorist: Historical Background and Impact on Cognitive Modeling". Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 50 (9): 880–884. doi:10.1177/154193120605000904. ISSN 2169-5067.
  16. Crevier 1993, p. 43.
  17. Crevier 1993, pp. 46–48.
  18. McCorduck 2004, pp. 167–168.
  19. Quoted in McCorduck 2004, p. 138.
  20. "Libraries/UnivArchives: Mind Models Exhibit/Problem Solving Research". shelf1.library.cmu.edu.
  21. Quoted in Crevier 1993, p. 46.
  22. McCorduck 2004, p. 127.

Related Research Articles

First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.

Gödel's ontological proof is a formal argument by the mathematician Kurt Gödel (1906–1978) for the existence of God. The argument is in a line of development that goes back to Anselm of Canterbury (1033–1109). St. Anselm's ontological argument, in its most succinct form, is as follows: "God, by definition, is that for which no greater can be conceived. God exists in the understanding. If God exists in the understanding, we could imagine Him to be greater by existing in reality. Therefore, God must exist." A more elaborate version was given by Gottfried Leibniz (1646–1716); this is the version that Gödel studied and attempted to clarify with his ontological argument.

Propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives. Propositions that contain no logical connectives are called atomic propositions.

Planner is a programming language designed by Carl Hewitt at MIT, and first published in 1969. First, subsets such as Micro-Planner and Pico-Planner were implemented, and then essentially the whole language was implemented as Popler by Julian Davies at the University of Edinburgh in the POP-2 programming language. Derivations such as QA4, Conniver, QLISP and Ether were important tools in artificial intelligence research in the 1970s, which influenced commercial developments such as Knowledge Engineering Environment (KEE) and Automated Reasoning Tool (ART).

In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation of the natural style of deduction used by mathematicians than David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems in a first-order language rather than conditional tautologies.

Allen Newell was an American researcher in computer science and cognitive psychology at the RAND Corporation and at Carnegie Mellon University's School of Computer Science, Tepper School of Business, and Department of Psychology. He contributed to the Information Processing Language (1956) and two of the earliest AI programs, the Logic Theorist (1956) and the General Problem Solver (1957). He was awarded the ACM's A.M. Turing Award along with Herbert A. Simon in 1975 for their contributions to artificial intelligence and the psychology of human cognition.

In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.

In the history of artificial intelligence, neat and scruffy are two contrasting approaches to artificial intelligence (AI) research. The distinction was made in the 1970s and was a subject of discussion until the mid-1980s.

Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas: theoretical foundations and analysis, the use of computer technology to aid logicians, and the use of concepts from logic for computer applications.

General Problem Solver (GPS) is a computer program created in 1957 by Herbert A. Simon, J. C. Shaw, and Allen Newell intended to work as a universal problem solver machine. In contrast to the earlier Logic Theorist project, GPS works with means–ends analysis.

A physical symbol system takes physical patterns (symbols), combining them into structures (expressions) and manipulating them to produce new expressions.

In computer science, in particular in knowledge representation and reasoning and metalogic, the area of automated reasoning is dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science and philosophy.

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.

Epistemic modal logic is a subfield of modal logic that is concerned with reasoning about knowledge. While epistemology has a long philosophical tradition dating back to Ancient Greece, epistemic logic is a much more recent development with applications in many fields, including philosophy, theoretical computer science, artificial intelligence, economics and linguistics. While philosophers since Aristotle have discussed modal logic, and Medieval philosophers such as Avicenna, Ockham, and Duns Scotus developed many of their observations, it was C. I. Lewis who created the first symbolic and systematic approach to the topic, in 1912. It continued to mature as a field, reaching its modern form in 1963 with the work of Kripke.

In logic, especially mathematical logic, a Hilbert system, sometimes called Hilbert calculus, Hilbert-style deductive system or Hilbert–Ackermann system, is a type of system of formal deduction attributed to Gottlob Frege and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well.

Hubert Dreyfus was a critic of artificial intelligence research. In a series of papers and books, including Alchemy and AI (1965), What Computers Can't Do and Mind over Machine (1986), he presented a pessimistic assessment of AI's progress and a critique of the philosophical foundations of the field. Dreyfus' objections are discussed in most introductions to the philosophy of artificial intelligence, including Russell & Norvig (2021), a standard AI textbook, and in Fearn (2007), a survey of contemporary philosophy.

This is a timeline of artificial intelligence, sometimes alternatively called synthetic intelligence.

Minimal logic, or minimal calculus, is a symbolic logic system originally developed by Ingebrigt Johansson. It is an intuitionistic and paraconsistent logic that rejects both the law of the excluded middle and the principle of explosion, and therefore holds neither of the following two derivations as valid: ⊢ A ∨ ¬A and A, ¬A ⊢ B.

In mathematical logic, minimal axioms for Boolean algebra are assumptions which are equivalent to the axioms of Boolean algebra, chosen to be as short as possible. For example, an axiom with six NAND operations and three variables is equivalent to Boolean algebra: ((a NAND b) NAND c) NAND (a NAND ((a NAND c) NAND a)) = c.
