Backward chaining

Backward chaining (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications. [1]

In game theory, researchers apply it to (simpler) subgames to find a solution to the game, in a process called backward induction. In chess, it is called retrograde analysis, and it is used to generate tablebases for chess endgames in computer chess.

Backward chaining is implemented in logic programming by SLD resolution. Both are based on the modus ponens inference rule. Backward chaining is one of the two most commonly used methods of reasoning with inference rules and logical implications – the other is forward chaining. Backward chaining systems usually employ a depth-first search strategy, as in Prolog. [2]
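
In logic-programming notation, a single backward-chaining (SLD resolution) step can be written as follows: a goal that matches the head of a definite clause is replaced by the clause's body, and the resulting sub-goals are attempted depth-first (standard notation, not tied to any particular system):

\[
\frac{?\, A \qquad A \leftarrow B_1 \land \cdots \land B_n}{?\, B_1, \ldots, B_n}
\]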

Usage

Backward chaining starts with a list of goals (or a hypothesis) and works backwards from the consequent to the antecedent to see if any data supports any of these consequents. [3] An inference engine using backward chaining would search the inference rules until it finds one with a consequent (Then clause) that matches a desired goal. If the antecedent (If clause) of that rule is not known to be true, then it is added to the list of goals (for the original goal to be confirmed, data confirming this new goal must also be provided).
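
A minimal sketch in Python may make this control flow concrete. The function name backward_chain and the (antecedents, consequent) rule layout are assumptions made here for illustration rather than an API from the cited sources, and rule variables are taken to be already instantiated so that unification and loop detection can be omitted:

# Illustrative backward chainer: a rule is a pair (antecedents, consequent),
# facts are plain strings, and variables are assumed to be pre-instantiated.
def backward_chain(goal, facts, rules):
    """Return True if `goal` can be proven from `facts` and `rules`."""
    if goal in facts:                       # the goal is a known fact
        return True
    for antecedents, consequent in rules:
        if consequent == goal:              # the Then-clause matches the goal
            # each If-clause becomes a new sub-goal, searched depth-first
            if all(backward_chain(subgoal, facts, rules)
                   for subgoal in antecedents):
                return True
    return False                            # no fact or rule supports the goal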

For example, suppose a new pet, Fritz, is delivered in an opaque box along with two facts about Fritz:

 Fritz croaks
 Fritz eats flies

The goal is to decide whether Fritz is green, based on a rule base containing the following four rules:

[Figure: An example of backward chaining]
  1. If X croaks and X eats flies – Then X is a frog
  2. If X chirps and X sings – Then X is a canary
  3. If X is a frog – Then X is green
  4. If X is a canary – Then X is yellow

With backward reasoning, an inference engine can determine whether Fritz is green in four steps. To start, the query is phrased as a goal assertion that is to be proven: "Fritz is green".

1. Fritz is substituted for X in rule #3 to see if its consequent matches the goal, so rule #3 becomes:

If Fritz is a frog – Then Fritz is green

Since the consequent matches the goal ("Fritz is green"), the rules engine now needs to see if the antecedent ("Fritz is a frog") can be proven. The antecedent, therefore, becomes the new goal:

 Fritz is a frog

2. Again substituting Fritz for X, rule #1 becomes:

If Fritz croaks and Fritz eats flies – Then Fritz is a frog

Since the consequent matches the current goal ("Fritz is a frog"), the inference engine now needs to see if the antecedent ("Fritz croaks and eats flies") can be proven. The antecedent, therefore, becomes the new goal:

 Fritz croaks and Fritz eats flies

3. Since this goal is a conjunction of two statements, the inference engine breaks it into two sub-goals, both of which must be proven:

 Fritz croaks
 Fritz eats flies

4. The inference engine sees that both of these sub-goals were given as initial facts. Therefore, the conjunction is true:

 Fritz croaks and Fritz eats flies

therefore the antecedent of rule #1 is true and the consequent must be true:

 Fritz is a frog

therefore the antecedent of rule #3 is true and the consequent must be true:

 Fritz is green

This derivation, therefore, allows the inference engine to prove that Fritz is green. Rules #2 and #4 were not used.
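
As an illustration only, the facts and rules above can be encoded for the backward_chain sketch given in the Usage section (with the variable X already instantiated to Fritz, so no unification is needed), and the query reproduces the same derivation:

facts = {"Fritz croaks", "Fritz eats flies"}                      # initial facts

rules = [
    (["Fritz croaks", "Fritz eats flies"], "Fritz is a frog"),    # rule 1
    (["Fritz chirps", "Fritz sings"], "Fritz is a canary"),       # rule 2
    (["Fritz is a frog"], "Fritz is green"),                      # rule 3
    (["Fritz is a canary"], "Fritz is yellow"),                   # rule 4
]

print(backward_chain("Fritz is green", facts, rules))             # prints: True

As in the walkthrough, only rules #3 and #1 ever have a consequent matching a goal; rules #2 and #4 are examined but never fire.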

Note that the goals always match the affirmed versions of the consequents of implications (and not the negated versions, as in modus tollens), and that their antecedents are then treated as the new goals (and not the conclusions, as in affirming the consequent), which ultimately must match known facts (usually defined as consequents whose antecedents are always true); thus, the inference rule used is modus ponens.
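
For reference, the three argument forms mentioned here can be written schematically; only the first is the rule that backward chaining applies (the third is not a valid rule at all):

\[
\text{modus ponens: } \frac{P \to Q \qquad P}{Q}
\qquad
\text{modus tollens: } \frac{P \to Q \qquad \neg Q}{\neg P}
\qquad
\text{affirming the consequent (invalid): } \frac{P \to Q \qquad Q}{P}
\]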

Because the list of goals determines which rules are selected and used, this method is called goal-driven, in contrast to data-driven forward-chaining inference. The backward chaining approach is often employed by expert systems.

Programming languages such as Prolog, Knowledge Machine and ECLiPSe support backward chaining within their inference engines. [4]

Related Research Articles

In propositional logic, affirming the consequent is a formal fallacy that is committed when, in the context of an indicative conditional statement, it is stated that because the consequent is true, the antecedent must therefore be true. It takes the form: "If P, then Q. Q. Therefore, P."

In classical logic, disjunctive syllogism is a valid argument form which is a syllogism having a disjunctive statement for one of its premises.
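
For reference, its schematic form is usually written as:

\[
\frac{P \lor Q \qquad \neg P}{Q}
\]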

In propositional logic, modus ponens, also known as modus ponendo ponens, implication elimination, or affirming the antecedent, is a deductive argument form and rule of inference. It can be summarized as "P implies Q. P is true. Therefore, Q must also be true."

In propositional logic, modus tollens (MT), also known as modus tollendo tollens and denying the consequent, is a deductive argument form and a rule of inference. Modus tollens is a mixed hypothetical syllogism that takes the form of "If P, then Q. Not Q. Therefore, not P." It is an application of the general truth that if a statement is true, then so is its contrapositive. The form shows that inference from P implies Q to the negation of Q implies the negation of P is a valid argument.

Deductive reasoning is the process of drawing valid inferences. An inference is valid if its conclusion follows logically from its premises, meaning that it is impossible for the premises to be true and the conclusion to be false. For example, the inference from the premises "all men are mortal" and "Socrates is a man" to the conclusion "Socrates is mortal" is deductively valid. An argument is sound if it is valid and all its premises are true. One approach defines deduction in terms of the intentions of the author: they have to intend for the premises to offer deductive support to the conclusion. With the help of this modification, it is possible to distinguish valid from invalid deductive reasoning: it is invalid if the author's belief about the deductive support is false, but even invalid deductive reasoning is a form of deductive reasoning.

Denying the antecedent is a formal fallacy of inferring the inverse from an original statement. Phrased another way, denying the antecedent occurs in the context of an indicative conditional statement and assumes that the negation of the antecedent implies the negation of the consequent. It is a type of mixed hypothetical syllogism that takes the form: "If P, then Q. Not P. Therefore, not Q."

"What the Tortoise Said to Achilles", written by Lewis Carroll in 1895 for the philosophical journal Mind, is a brief allegorical dialogue on the foundations of logic. The title alludes to one of Zeno's paradoxes of motion, in which Achilles could never overtake the tortoise in a race. In Carroll's dialogue, the tortoise challenges Achilles to use the force of logic to make him accept the conclusion of a simple deductive argument. Ultimately, Achilles fails, because the clever tortoise leads him into an infinite regression.

In classical logic, a hypothetical syllogism is a valid argument form, a deductive syllogism with a conditional statement for one or both of its premises. Ancient references point to the works of Theophrastus and Eudemus for the first investigation of this kind of syllogism.
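
For reference, the pure form is usually written as:

\[
\frac{P \to Q \qquad Q \to R}{P \to R}
\]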

In natural languages, an indicative conditional is a conditional sentence such as "If Leona is at home, she isn't in Paris", whose grammatical form restricts it to discussing what could be true. Indicatives are typically defined in opposition to counterfactual conditionals, which have extra grammatical marking which allows them to discuss eventualities which are no longer possible.

Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". In theory, inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle. Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular evidence to a universal conclusion. A third type of inference is sometimes distinguished, notably by Charles Sanders Peirce, who contradistinguished abduction from induction.

In the field of artificial intelligence, an inference engine is a software component of an intelligent system that applies logical rules to the knowledge base to deduce new information. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world. The inference engine applied logical rules to the knowledge base and deduced new knowledge. This process would iterate as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.

Forward chaining is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining.
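
For contrast with the backward chainer sketched earlier, a minimal forward chainer over the same assumed (antecedents, consequent) rule layout simply applies modus ponens until no new fact can be derived (again an illustrative sketch, not code from the cited sources):

def forward_chain(facts, rules):
    """Repeatedly apply modus ponens until no new facts can be derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in known and all(a in known for a in antecedents):
                known.add(consequent)       # fire the rule: assert its Then-clause
                changed = True
    return known

Run on the Fritz facts and rules above, this asserts "Fritz is a frog" and then "Fritz is green" without ever being asked about them, which is the data-driven behaviour contrasted with backward chaining.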

<span class="mw-page-title-main">Material conditional</span> Logical connective

The material conditional is an operation commonly used in logic. When the conditional symbol → is interpreted as material implication, a formula P → Q is true unless P is true and Q is false. Material implication can also be characterized inferentially by modus ponens, modus tollens, conditional proof, and classical reductio ad absurdum.
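
Its truth conditions can be summarized in a standard truth table:

\[
\begin{array}{cc|c}
P & Q & P \to Q \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
\]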

<span class="mw-page-title-main">Wason selection task</span> Test in the study of deductive reasoning

The Wason selection task is a logic puzzle devised by Peter Cathcart Wason in 1966. It is one of the most famous tasks in the study of deductive reasoning. An example of the puzzle is:

You are shown a set of four cards placed on a table, each of which has a number on one side and a color on the other. The visible faces of the cards show 3, 8, blue and red. Which card(s) must you turn over in order to test that if a card shows an even number on one face, then its opposite face is blue?

A production system is a computer program typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behavior, but also includes the mechanism necessary to follow those rules as the system responds to states of the world. Those rules, termed productions, are a basic knowledge representation found useful in automated planning and scheduling, expert systems, and action selection.

In logic, the logical form of a statement is a precisely-specified semantic version of that statement in a formal system. Informally, the logical form attempts to formalize a possibly ambiguous statement into a statement with a precise, unambiguous logical interpretation with respect to a formal system. In an ideal formal language, the meaning of a logical form can be determined unambiguously from syntax alone. Logical forms are semantic, not syntactic constructs; therefore, there may be more than one string that represents the same logical form in a given language.

In logic and mathematics, contraposition, or transposition, refers to the inference of going from a conditional statement to its logically equivalent contrapositive, and an associated proof method known as proof by contrapositive. The contrapositive of a statement has its antecedent and consequent inverted and flipped.
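
The underlying equivalence can be stated as:

\[
(P \to Q) \;\equiv\; (\neg Q \to \neg P)
\]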

Fuzzy rules are used within fuzzy logic systems to infer an output based on input variables. Modus ponens and modus tollens are the most important rules of inference. A modus ponens rule is of the form "IF x is A, THEN y is B", where A and B are fuzzy sets.

In computer science, a rule-based system is a computer system in which domain-specific knowledge is represented in the form of rules and general-purpose reasoning is used to solve problems in the domain.

In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.

References

  1. Feigenbaum, Edward (1988). The Rise of the Expert Company. Times Books. p. 317. ISBN 0-8129-1731-6.
  2. Chein, Michel; Mugnier, Marie-Laure (2009). Graph-Based Knowledge Representation: Computational Foundations of Conceptual Graphs. Springer. p. 297. ISBN 978-1-84800-285-2.
  3. Definition of backward chaining as a depth-first search method:
  4. Languages that support backward chaining:
