Backward chaining

Backward chaining (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications. [1]

In game theory, researchers apply it to (simpler) subgames to find a solution to the game, in a process called backward induction. In chess, it is called retrograde analysis, and it is used to generate endgame tablebases for computer chess.

Backward chaining is implemented in logic programming by SLD resolution. It is one of the two most commonly used methods of reasoning with inference rules and logical implications – the other is forward chaining. Both are based on the modus ponens inference rule. Backward chaining systems usually employ a depth-first search strategy, as Prolog does. [2]

How it works

Backward chaining starts with a list of goals (or a hypothesis) and works backwards from the consequent to the antecedent to see if any data supports any of these consequents. [3] An inference engine using backward chaining searches the inference rules until it finds one with a consequent (Then clause) that matches a desired goal. If the antecedent (If clause) of that rule is not known to be true, it is added to the list of goals: for the original goal to be confirmed, one must also provide data that confirms this new goal.
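
The following Python sketch illustrates this goal-driven search; it is a minimal illustration under simplifying assumptions, not a standard implementation. Each rule is represented as a list of antecedent string templates paired with one consequent template, with X standing for the rule's single variable, and the helper names match and prove are chosen only for this example. Like Prolog, it explores goals depth-first.

def match(template, goal):
    """Return the constant that, substituted for the variable X in
    template, turns it into goal; return None if no match is possible.
    Assumes X occurs exactly once in every rule template."""
    prefix, _, suffix = template.partition("X")
    if (goal.startswith(prefix) and goal.endswith(suffix)
            and len(goal) > len(prefix) + len(suffix)):
        return goal[len(prefix):len(goal) - len(suffix)]
    return None

def prove(goal, rules, facts, depth=0):
    """Try to prove goal by backward chaining: look for a rule whose
    consequent (Then clause) matches the goal, then recursively prove
    its antecedents (If clause); known facts end the recursion."""
    indent = "  " * depth
    if goal in facts:
        print(indent + goal + ": given as a fact")
        return True
    for antecedents, consequent in rules:
        binding = match(consequent, goal)
        if binding is None:
            continue  # this rule's consequent does not match the goal
        rule_text = " and ".join(antecedents) + " -> " + consequent
        print(indent + goal + ": via rule (" + rule_text + ") with X = " + binding)
        subgoals = [a.replace("X", binding) for a in antecedents]
        if all(prove(sg, rules, facts, depth + 1) for sg in subgoals):
            return True  # every antecedent (sub-goal) was proven
    return False  # no fact or rule establishes this goal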

For example, suppose a new pet, Fritz, is delivered in an opaque box along with two facts about Fritz:

 Fritz croaks
 Fritz eats flies

The goal is to decide whether Fritz is green, based on a rule base containing the following four rules:

Figure: An example of backward chaining.
  1. If X croaks and X eats flies – Then X is a frog
  2. If X chirps and X sings – Then X is a canary
  3. If X is a frog – Then X is green
  4. If X is a canary – Then X is yellow
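
In the notation of the earlier sketch, the two facts and the four rules above can be written down directly; the names RULES and FACTS are, again, illustrative assumptions rather than part of any particular system.

RULES = [
    (["X croaks", "X eats flies"], "X is a frog"),    # rule 1
    (["X chirps", "X sings"],      "X is a canary"),  # rule 2
    (["X is a frog"],              "X is green"),     # rule 3
    (["X is a canary"],            "X is yellow"),    # rule 4
]
FACTS = {"Fritz croaks", "Fritz eats flies"}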

With backward reasoning, an inference engine can determine whether Fritz is green in four steps. To start, the query is phrased as a goal assertion that is to be proven: "Fritz is green".

1. Fritz is substituted for X in rule #3 to see if its consequent matches the goal, so rule #3 becomes:

If Fritz is a frog – Then Fritz is green

Since the consequent matches the goal ("Fritz is green"), the inference engine now needs to see if the antecedent ("Fritz is a frog") can be proven. The antecedent, therefore, becomes the new goal:

 Fritz is a frog

2. Again substituting Fritz for X, rule #1 becomes:

If Fritz croaks and Fritz eats flies – Then Fritz is a frog

Since the consequent matches the current goal ("Fritz is a frog"), the inference engine now needs to see if the antecedent ("Fritz croaks and Fritz eats flies") can be proven. The antecedent, therefore, becomes the new goal:

 Fritz croaks and Fritz eats flies

3. Since this goal is a conjunction of two statements, the inference engine breaks it into two sub-goals, both of which must be proven:

 Fritz croaks
 Fritz eats flies

4. The inference engine finds that both of these sub-goals were given as initial facts. Therefore, the conjunction is true:

 Fritz croaks and Fritz eats flies

therefore the antecedent of rule #1 is true and the consequent must be true:

 Fritz is a frog

therefore the antecedent of rule #3 is true and the consequent must be true:

 Fritz is green

This derivation, therefore, allows the inference engine to prove that Fritz is green. Rules #2 and #4 were not used.
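
Running the illustrative prove function from earlier on this rule base reproduces the same reasoning; the expected trace is shown in the comments (assuming the RULES and FACTS definitions sketched above).

print(prove("Fritz is green", RULES, FACTS))
# Fritz is green: via rule (X is a frog -> X is green) with X = Fritz
#   Fritz is a frog: via rule (X croaks and X eats flies -> X is a frog) with X = Fritz
#     Fritz croaks: given as a fact
#     Fritz eats flies: given as a fact
# True

A query such as prove("Fritz is yellow", RULES, FACTS) would instead fail at the unprovable sub-goal "Fritz chirps" and return False, mirroring the fact that rules #2 and #4 cannot contribute to a successful derivation here.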

Note that the goals always match the affirmed versions of the consequents of implications (not the negated versions, as in modus tollens), and even then, their antecedents are taken as the new goals (not the conclusions, as in affirming the consequent), which ultimately must match known facts (usually defined as consequents whose antecedents are always true); thus, the inference rule used is modus ponens.

Because the list of goals determines which rules are selected and used, this method is called goal-driven, in contrast to data-driven forward-chaining inference. The backward chaining approach is often employed by expert systems.

Programming languages such as Prolog, Knowledge Machine and ECLiPSe support backward chaining within their inference engines. [4]

Related Research Articles

In propositional logic, affirming the consequent, sometimes called converse error, fallacy of the converse, or confusion of necessity and sufficiency, is a formal fallacy of taking a true conditional statement under certain assumptions, and invalidly inferring its converse, even though that statement may not be true under the same assumptions. This arises when the consequent has other possible antecedents.

Expert system – Computer system emulating the decision-making ability of a human expert

In artificial intelligence, an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.

In propositional logic, modus ponens, also known as modus ponendo ponens, implication elimination, or affirming the antecedent, is a deductive argument form and rule of inference. It can be summarized as "P implies Q. P is true. Therefore, Q must also be true."

In propositional logic, modus tollens (MT), also known as modus tollendo tollens and denying the consequent, is a deductive argument form and a rule of inference. Modus tollens is a mixed hypothetical syllogism that takes the form of "If P, then Q. Not Q. Therefore, not P." It is an application of the general truth that if a statement is true, then so is its contrapositive. The form shows that inference from P implies Q to the negation of Q implies the negation of P is a valid argument.

Deductive reasoning is the mental process of drawing deductive inferences. An inference is deductively valid if its conclusion follows logically from its premises, i.e. it is impossible for the premises to be true and the conclusion to be false.

Denying the antecedent, sometimes also called inverse error or fallacy of the inverse, is a formal fallacy of inferring the inverse from an original statement. It is a type of mixed hypothetical syllogism in the form: "If P, then Q. Not P. Therefore, not Q."

"What the Tortoise Said to Achilles", written by Lewis Carroll in 1895 for the philosophical journal Mind, is a brief allegorical dialogue on the foundations of logic. The title alludes to one of Zeno's paradoxes of motion, in which Achilles could never overtake the tortoise in a race. In Carroll's dialogue, the tortoise challenges Achilles to use the force of logic to make him accept the conclusion of a simple deductive argument. Ultimately, Achilles fails, because the clever tortoise leads him into an infinite regression.

In classical logic, a hypothetical syllogism is a valid argument form, a deductive syllogism with a conditional statement for one or both of its premises. Ancient references point to the works of Theophrastus and Eudemus for the first investigation of this kind of syllogism.

In natural languages, an indicative conditional is a conditional sentence such as "If Leona is at home, she isn't in Paris", whose grammatical form restricts it to discussing what could be true. Indicatives are typically defined in opposition to counterfactual conditionals, which have extra grammatical marking which allows them to discuss eventualities which are no longer possible.

Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle. Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular evidence to a universal conclusion. A third type of inference, abduction, is sometimes distinguished from induction, notably by Charles Sanders Peirce.

In the field of artificial intelligence, an inference engine is a component of an intelligent system that applies logical rules to the knowledge base to deduce new information. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world. The inference engine applied logical rules to the knowledge base and deduced new knowledge. This process would iterate as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.

Forward chaining is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining.

Material conditional – Logical connective

The material conditional is an operation commonly used in logic. When the conditional symbol → is interpreted as material implication, a formula P → Q is true unless P is true and Q is false. Material implication can also be characterized inferentially by modus ponens, modus tollens, conditional proof, and classical reductio ad absurdum.

Wason selection task – Test in the study of deductive reasoning

The Wason selection task is a logic puzzle devised by Peter Cathcart Wason in 1966. It is one of the most famous tasks in the study of deductive reasoning. An example of the puzzle is:

You are shown a set of four cards placed on a table, each of which has a number on one side and a color on the other. The visible faces of the cards show 3, 8, blue and red. Which card(s) must you turn over in order to test that if a card shows an even number on one face, then its opposite face is blue?

In propositional logic, transposition is a valid rule of replacement that permits one to switch the antecedent with the consequent of a conditional statement in a logical proof if they are also both negated. It is the inference from the truth of "A implies B" to the truth of "Not-B implies not-A", and conversely. It is very closely related to the rule of inference modus tollens. It is the rule that (A → B) may be replaced by (¬B → ¬A), and vice versa.

A "production system" is a computer program typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behavior, but it also includes the mechanism necessary to follow those rules as the system responds to states of the world. Those rules, termed productions, are a basic representation found useful in automated planning, expert systems and action selection.

Logical form – Form for logical arguments, obtained by abstracting from the subject matter of its content terms

In logic, the logical form of a statement is a precisely-specified semantic version of that statement in a formal system. Informally, the logical form attempts to formalize a possibly ambiguous statement into a statement with a precise, unambiguous logical interpretation with respect to a formal system. In an ideal formal language, the meaning of a logical form can be determined unambiguously from syntax alone. Logical forms are semantic, not syntactic constructs; therefore, there may be more than one string that represents the same logical form in a given language.

In logic and philosophy, a formal fallacy, deductive fallacy, logical fallacy or non sequitur is a pattern of reasoning rendered invalid by a flaw in its logical structure that can neatly be expressed in a standard logic system, for example propositional logic. It is defined as a deductive argument that is invalid. The argument itself could have true premises, but still have a false conclusion. Thus, a formal fallacy is a fallacy where deduction goes wrong, and is no longer a logical process. This may not affect the truth of the conclusion, since validity and truth are separate in formal logic.

In computer science, a rule-based system is a computer system in which domain-specific knowledge is represented in the form of rules and general-purpose reasoning is used to solve problems in the domain.

In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.

References

  1. Feigenbaum, Edward (1988). The Rise of the Expert Company. Times Books. p. 317. ISBN 0-8129-1731-6.
  2. Michel Chein; Marie-Laure Mugnier (2009). Graph-based knowledge representation: computational foundations of conceptual graphs. Springer. p. 297. ISBN 978-1-84800-285-2.
  3. Definition of backward chaining as a depth-first search method:
  4. Languages that support backward chaining:
