**Belief revision** is the process of changing beliefs to take into account a new piece of information. The logical formalization of belief revision is researched in philosophy, in databases, and in artificial intelligence for the design of rational agents.

- Revision and update
- Example
- Contraction, expansion, revision, consolidation, and merging
- The AGM postulates
- Conditions equivalent to the AGM postulates
- Contraction
- The Ramsey test
- Non-monotonic inference relation
- Foundational revision
- Model-based revision and update
- Iterated revision
- Merging
- Social choice theory
- Complexity
- Relevance
- Implementations
- See also
- Notes
- References
- External links

What makes belief revision non-trivial is that several different ways for performing this operation may be possible. For example, if the current knowledge includes the three facts "a is true", "b is true" and "if a and b are true then c is true", the introduction of the new information "c is false" can be done preserving consistency only by removing at least one of the three facts. In this case, there are at least three different ways for performing revision. In general, there may be several different ways for changing knowledge.
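The choice among these repairs can be made explicit by enumerating the maximal subsets of the old beliefs that are consistent with the new information. The following Python sketch (the helper names are illustrative; satisfiability is checked by brute force over the three variables) does exactly that for the example above:

```python
from itertools import combinations, product

# Beliefs as truth functions over assignments (dicts of variable -> bool).
beliefs = {
    "a": lambda v: v["a"],
    "b": lambda v: v["b"],
    "a and b imply c": lambda v: (not (v["a"] and v["b"])) or v["c"],
}
new_info = lambda v: not v["c"]   # the new information: c is false

def satisfiable(formulas):
    """Brute-force satisfiability over all assignments to a, b, c."""
    for bits in product([False, True], repeat=3):
        v = dict(zip("abc", bits))
        if all(f(v) for f in formulas):
            return True
    return False

# Maximal subsets of the old beliefs consistent with the new information.
maximal = []
for size in range(len(beliefs), -1, -1):
    for names in combinations(beliefs, size):
        if any(set(names) <= set(m) for m in maximal):
            continue   # already covered by a larger consistent subset
        if satisfiable([beliefs[n] for n in names] + [new_info]):
            maximal.append(names)

print(maximal)   # three maximal subsets, each dropping exactly one belief
```

Each of the three maximal subsets corresponds to one way of performing the revision: drop the rule, drop "a is true", or drop "b is true".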

Two kinds of changes are usually distinguished:^{ [1] }

- update
- the new information is about the situation at present, while the old beliefs refer to the past; update is the operation of changing the old beliefs to take into account the change;

- revision
- both the old beliefs and the new information refer to the same situation; an inconsistency between the new and old information is explained by the possibility of old information being less reliable than the new one; revision is the process of inserting the new information into the set of old beliefs without generating an inconsistency.

The main assumption of belief revision is that of minimal change: the knowledge before and after the change should be as similar as possible. In the case of update, this principle formalizes the assumption of inertia. In the case of revision, this principle enforces as much information as possible to be preserved by the change.

The following classical example shows that the operations to perform in the two settings of update and revision are not the same. The example is based on two different interpretations of the set of beliefs a ∨ b and the new piece of information ¬a:

- update
- in this scenario, two satellites, Unit A and Unit B, orbit around Mars; the satellites are programmed to land while transmitting their status to Earth; and Earth has received a transmission from one of the satellites, communicating that it is still in orbit. However, due to interference, it is not known which satellite sent the signal; subsequently, Earth receives the communication that Unit A has landed. This scenario can be modeled in the following way: two propositional variables a and b indicate that Unit A and Unit B, respectively, are still in orbit; the initial set of beliefs is a ∨ b (at least one of the two satellites is still in orbit) and the new piece of information is ¬a (Unit A has landed, and is therefore not in orbit). The only rational result of the update is ¬a; since the initial information that one of the two satellites had not landed yet was possibly coming from Unit A, the position of Unit B is not known.

- revision
- the play "Six Characters in Search of an Author" will be performed in one of the two local theatres. This information can be denoted by a ∨ b, where a and b indicate that the play will be performed at the first or at the second theatre, respectively; a further piece of information that "Jesus Christ Superstar" will be performed at the first theatre indicates that ¬a holds. In this case, the obvious conclusion is that "Six Characters in Search of an Author" will be performed at the second but not the first theatre, which is represented in logic by ¬a ∧ b.

This example shows that revising the belief a ∨ b with the new information ¬a produces two different results, ¬a and ¬a ∧ b, depending on whether the setting is that of update or revision.
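The difference can be reproduced computationally on the sets of models. The sketch below (illustrative helper names) implements a pointwise, Winslett-style update and a Dalal-style revision based on Hamming distance, the two operators discussed later in this article, and applies them to the example:

```python
from itertools import product

def models(formula, n=2):
    """All assignments (tuples of bools) satisfying `formula`."""
    return {m for m in product([False, True], repeat=n) if formula(m)}

def hamming(m1, m2):
    return sum(x != y for x, y in zip(m1, m2))

def update(k_models, p_models):
    """Winslett-style update: closest new models, per old model."""
    result = set()
    for m in k_models:
        d = min(hamming(m, p) for p in p_models)
        result |= {p for p in p_models if hamming(m, p) == d}
    return result

def revise(k_models, p_models):
    """Dalal-style revision: new models globally closest to the old ones."""
    d = min(hamming(m, p) for m in k_models for p in p_models)
    return {p for p in p_models
            if any(hamming(m, p) == d for m in k_models)}

K = models(lambda m: m[0] or m[1])   # a ∨ b
P = models(lambda m: not m[0])       # ¬a

print(update(K, P))   # both models of ¬a: the position of Unit B is unknown
print(revise(K, P))   # only (False, True): the play is at the second theatre
```

Update keeps every closest alternative for each old possibility, while revision keeps only the globally most plausible alternatives, matching the two informal readings above.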

In the setting in which all beliefs refer to the same situation, a distinction between various operations that can be performed is made:

- contraction
- removal of a belief;

- expansion
- addition of a belief without checking consistency;

- revision
- addition of a belief while maintaining consistency;

- extraction
- extracting a consistent set of beliefs and/or epistemic entrenchment ordering;

- consolidation
- restoring consistency of a set of beliefs;

- merging
- fusion of two or more sets of beliefs while maintaining consistency.

Revision and merging differ in that the first operation is done when the new belief to incorporate is considered more reliable than the old ones; therefore, consistency is maintained by removing some of the old beliefs. Merging is a more general operation, in that the priority among the belief sets may or may not be the same.

Revision can be performed by first incorporating the new fact and then restoring consistency via consolidation. This is actually a form of merging rather than revision, as the new information is not always treated as more reliable than the old knowledge.

The AGM postulates (named after their proponents Alchourrón, Gärdenfors, and Makinson) are properties that an operator that performs revision should satisfy in order for that operator to be considered rational. The considered setting is that of revision, that is, different pieces of information referring to the same situation. Three operations are considered: expansion (addition of a belief without a consistency check), revision (addition of a belief while maintaining consistency), and contraction (removal of a belief).

The first six postulates are called "the basic AGM postulates". In the settings considered by Alchourrón, Gärdenfors, and Makinson, the current set of beliefs is represented by a deductively closed set of logical formulae K called the belief set, the new piece of information is a logical formula P, and revision is performed by a binary operator * that takes as its operands the current beliefs and the new information and produces as a result a belief set K * P representing the result of the revision. The operator + denotes expansion: K + P is the deductive closure of K ∪ {P}. The AGM postulates for revision are:

- Closure: K * P is a belief set (i.e., a deductively closed set of formulae);
- Success: P ∈ K * P;
- Inclusion: K * P ⊆ K + P;
- Vacuity: if ¬P ∉ K, then K * P = K + P;
- Consistency: K * P is inconsistent only if P is inconsistent;
- Extensionality: if P and Q are logically equivalent, then K * P = K * Q (see logical equivalence);
- Superexpansion: K * (P ∧ Q) ⊆ (K * P) + Q;
- Subexpansion: if ¬Q ∉ K * P, then (K * P) + Q ⊆ K * (P ∧ Q).

A revision operator that satisfies all eight postulates is the full meet revision, in which K * P is equal to K + P if P is consistent with K, and to the deductive closure of {P} otherwise. While satisfying all AGM postulates, this revision operator has been considered too conservative, in that no information from the old knowledge base is maintained if the revising formula is inconsistent with it.
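In the finite propositional case, a deductively closed belief set can be represented by its set of models, and full meet revision becomes a one-line set operation. A minimal sketch, assuming two propositional variables a and b:

```python
from itertools import product

def models(formula, n=2):
    """All assignments (tuples of bools) satisfying `formula`."""
    return {m for m in product([False, True], repeat=n) if formula(m)}

def full_meet_revise(k_models, p_models):
    """Full meet revision on model sets: keep the old models consistent
    with P; if there are none, discard the old beliefs entirely."""
    common = k_models & p_models
    return common if common else p_models

K = models(lambda m: m[0] and m[1])                            # a ∧ b
consistent = full_meet_revise(K, models(lambda m: m[1]))       # revise by b
inconsistent = full_meet_revise(K, models(lambda m: not m[0])) # revise by ¬a

print(consistent)    # a ∧ b survives: the new information was consistent
print(inconsistent)  # only ¬a remains: all of the old base is discarded
```

The second call exhibits the conservatism noted above: after revising by ¬a, nothing of a ∧ b is retained beyond what ¬a itself implies.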

The AGM postulates are equivalent to several different conditions on the revision operator; in particular, they are equivalent to the revision operator being definable in terms of structures known as selection functions, epistemic entrenchments, systems of spheres, and preference relations. The latter are reflexive, transitive, and total relations over the set of models.

Each revision operator satisfying the AGM postulates is associated with a family of preference relations ≤_K, one for each possible belief set K, such that the models of K are exactly the minimal models according to ≤_K. The revision operator and its associated family of orderings are related by the fact that K * P is the set of formulae whose set of models contains all the minimal models of P according to ≤_K. This condition is equivalent to the set of models of K * P being exactly the set of the minimal models of P according to the ordering ≤_K.

A preference ordering ≤_K represents an order of implausibility among all situations, including those that are conceivable yet currently considered false. The minimal models according to such an ordering are exactly the models of the knowledge base, which are the models that are currently considered the most likely. All other models are greater than these and are indeed considered less plausible. In general, M < M' indicates that the situation represented by the model M is believed to be more plausible than the situation represented by M'. As a result, revising by a formula having M and M' as models should select only M to be a model of the revised knowledge base, as this model represents the most likely scenario among those supported by the formula.
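Revision by minimal models can be sketched directly: given a ranking encoding the preference ordering associated with K (a hypothetical ranking over two variables, with lower rank meaning more plausible), the models of K * P are the minimal models of P:

```python
# A (hypothetical) faithful ranking for a base K = a ∧ b over variables a, b:
# models of the base get rank 0, all others a positive rank.
rank = {
    (True, True): 0,    # the single model of K
    (True, False): 1,
    (False, True): 1,
    (False, False): 2,
}

def revise_with_ordering(rank, p_models):
    """AGM-style revision: the models of K * P are the minimal models
    of P according to the ranking associated with K."""
    best = min(rank[m] for m in p_models)
    return {m for m in p_models if rank[m] == best}

p_models = {(False, True), (False, False)}   # models of ¬a
print(revise_with_ordering(rank, p_models))  # {(False, True)}
```

Here (False, True) is selected because it is strictly more plausible than (False, False) under the assumed ranking, exactly the M versus M' situation described above.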

Contraction is the operation of removing a belief P from a knowledge base K; the result of this operation is denoted by K − P. The operators of revision and contraction are related by the Levi and Harper identities:

- Levi identity: K * P = (K − ¬P) + P;
- Harper identity: K − P = K ∩ (K * ¬P).

Eight postulates have been defined for contraction. Whenever a revision operator satisfies the eight postulates for revision, its corresponding contraction operator satisfies the eight postulates for contraction and vice versa. If a contraction operator satisfies at least the first six postulates for contraction, translating it into a revision operator and then back into a contraction operator using the two identities above leads to the original contraction operator. The same holds starting from a revision operator.
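At the level of model sets, the two identities can be checked mechanically. The sketch below uses full meet contraction and expansion (note that the intersection of two belief sets corresponds to the union of their model sets); the helper names are illustrative:

```python
from itertools import product

ALL = set(product([False, True], repeat=2))  # all assignments over a, b

def expand(k_models, p_models):
    """Expansion K + P: intersect the model sets."""
    return k_models & p_models

def full_meet_contract(k_models, p_models):
    """Stop believing P: if K entails P, admit the non-P worlds too."""
    if k_models <= p_models:          # K entails P
        return k_models | (ALL - p_models)
    return k_models

def revise_levi(k_models, p_models):
    """Levi identity: K * P = (K - ¬P) + P."""
    return expand(full_meet_contract(k_models, ALL - p_models), p_models)

def contract_harper(k_models, p_models):
    """Harper identity: K - P = K ∩ (K * ¬P), i.e. the union of models."""
    return k_models | revise_levi(k_models, ALL - p_models)

K = {(True, True)}                    # models of a ∧ b
not_a = {m for m in ALL if not m[0]}  # models of ¬a

print(revise_levi(K, not_a))            # full meet revision by ¬a
print(contract_harper(K, ALL - not_a))  # contracting a: agrees with the direct contraction
```

For this example, the contraction obtained through the Harper identity coincides with applying full meet contraction directly, as the identities guarantee.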

One of the postulates for contraction has long been debated, the recovery postulate:

- Recovery: K ⊆ (K − P) + P

According to this postulate, the removal of a belief followed by the reintroduction of the same belief into the belief set should lead to the original belief set. There are examples showing that such behavior is not always reasonable: in particular, contraction by a general condition leads to the removal of the more specific conditions that entail it from the belief set; it is then unclear why the reintroduction of the general condition should also lead to the reintroduction of the more specific ones. For example, if George was previously believed to have German citizenship, he was also believed to be European. Contracting this latter belief amounts to ceasing to believe that George is European; therefore, the belief that George has German citizenship is also retracted from the belief set. If George is later discovered to have Austrian citizenship, the fact that he is European is also reintroduced. According to the recovery postulate, however, the belief that he also has German citizenship should also be reintroduced.

The correspondence between revision and contraction induced by the Levi and Harper identities is such that a contraction not satisfying the recovery postulate is translated into a revision satisfying all eight postulates, and that a revision satisfying all eight postulates is translated into a contraction satisfying all eight postulates, including recovery. As a result, if recovery is excluded from consideration, a number of contraction operators are translated into a single revision operator, which can then be translated back into exactly one contraction operator. This operator is the only one of the initial group of contraction operators that satisfies recovery; among this group, it is the operator that preserves as much information as possible.

The evaluation of a counterfactual conditional a > b can be reduced, according to the **Ramsey test** (named for Frank P. Ramsey), to the hypothetical addition of a to the set of current beliefs followed by a check for the truth of b. If K is the set of beliefs currently held, the Ramsey test is formalized by the following correspondence:

- a > b ∈ K if and only if b ∈ K * a

If the considered language of the formulae representing beliefs is propositional, the Ramsey test gives a consistent definition for counterfactual conditionals in terms of a belief revision operator. However, if the language of formulae representing beliefs itself includes the counterfactual conditional connective >, the Ramsey test leads to the Gärdenfors triviality result: there is no non-trivial revision operator that satisfies both the AGM postulates for revision and the condition of the Ramsey test. This result holds under the assumption that counterfactual formulae like a > b can be present in belief sets and revising formulae. Several solutions to this problem have been proposed.

Given a fixed knowledge base K and a revision operator *, one can define a non-monotonic inference relation using the following definition: P ⊢ Q if and only if Q ∈ K * P. In other words, a formula P entails another formula Q if the addition of P to the current knowledge base leads to the derivation of Q. This inference relation is non-monotonic.
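This construction can be sketched with any revision operator; using full meet revision as a stand-in, the following shows the non-monotonicity of the resulting inference relation:

```python
from itertools import product

def models(formula, n=2):
    """All assignments (tuples of bools) satisfying `formula`."""
    return {m for m in product([False, True], repeat=n) if formula(m)}

def revise(k_models, p_models):
    """Full meet revision as a stand-in for any AGM operator."""
    common = k_models & p_models
    return common if common else p_models

def entails(k_models, p, q, n=2):
    """P |~ Q iff Q holds in every model of K * P."""
    return all(q(m) for m in revise(k_models, models(p, n)))

K = models(lambda m: m[0] and m[1])   # belief base: a ∧ b

# a |~ b: revising by a keeps b ...
print(entails(K, lambda m: m[0], lambda m: m[1]))               # True
# ... but the stronger premise a ∧ ¬b does not: non-monotonicity.
print(entails(K, lambda m: m[0] and not m[1], lambda m: m[1]))  # False
```

Strengthening the premise from a to a ∧ ¬b retracts the conclusion b, which is exactly the behavior a monotonic consequence relation would forbid.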

The AGM postulates can be translated into a set of postulates for this inference relation. Each of these postulates is entailed by some previously considered set of postulates for non-monotonic inference relations. Vice versa, conditions that have been considered for non-monotonic inference relations can be translated into postulates for a revision operator. All these postulates are entailed by the AGM postulates.

In the AGM framework, a belief set is represented by a deductively closed set of propositional formulae. While such sets are infinite, they can always be finitely represented. However, working with deductively closed sets of formulae leads to the implicit assumption that equivalent belief sets should be considered equal when revising. This is called the *principle of irrelevance of syntax*.

This principle has been and is currently debated: while K1 = {a, b} and K2 = {a ∧ b} are two equivalent sets, revising by ¬b should produce different results. In the first case, a and b are two separate beliefs; therefore, revising by ¬b should not produce any effect on a, and the result of revision is {a, ¬b}. In the second case, a ∧ b is taken as a single belief. The fact that b is false contradicts this belief, which should therefore be removed from the belief set. The result of revision is therefore {¬b} in this case.
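The following sketch makes the syntax sensitivity concrete with a deliberately naive base revision operator (it keeps a greedily chosen consistent subset rather than considering all maximal subsets, so it is only an illustration):

```python
from itertools import product

def satisfiable(formulas, n=2):
    """Brute-force satisfiability over assignments to a, b."""
    return any(all(f(m) for f in formulas)
               for m in product([False, True], repeat=n))

def base_revise(base, new):
    """Naive base revision: keep each formula of the base, in order,
    as long as the result stays consistent with the new formula."""
    kept = []
    for f in base:
        if satisfiable(kept + [f, new]):
            kept.append(f)
    return kept + [new]

a = lambda m: m[0]
b = lambda m: m[1]
a_and_b = lambda m: m[0] and m[1]
not_b = lambda m: not m[1]

print(len(base_revise([a, b], not_b)))      # keeps a, drops b: 2 formulas
print(len(base_revise([a_and_b], not_b)))   # drops a ∧ b entirely: 1 formula
```

The two bases are logically equivalent, yet revising {a, b} by ¬b retains a while revising {a ∧ b} retains nothing of the old base: the result depends on syntax, not just on meaning.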

The problem of using deductively closed knowledge bases is that no distinction is made between pieces of knowledge that are known by themselves and pieces of knowledge that are merely consequences of them. This distinction is instead made by the *foundational* approach to belief revision, which is related to foundationalism in philosophy. According to this approach, retracting a non-derived piece of knowledge should lead to retracting all its consequences that are not otherwise supported (by other non-derived pieces of knowledge). This approach can be realized by using knowledge bases that are not deductively closed and assuming that all formulae in the knowledge base represent self-standing beliefs, that is, they are not derived beliefs. In order to distinguish the foundational approach to belief revision from that based on deductively closed knowledge bases, the latter is called the *coherentist* approach. This name has been chosen because the coherentist approach aims at restoring the coherence (consistency) among *all* beliefs, both self-standing and derived ones. This approach is related to coherentism in philosophy.

Foundationalist revision operators working on non-deductively closed belief sets typically select some subsets of K that are consistent with P, combine them in some way, and then conjoin them with P. The following are some non-deductively closed base revision operators.

- WIDTIO
- (When in Doubt, Throw it Out) the maximal subsets of K that are consistent with P are intersected, and P is added to the resulting set; in other words, the result of revision is composed of P and of all formulae of K that are in all maximal subsets of K that are consistent with P;

- Williams
- solved an open problem by developing a new representation for finite belief bases that allows AGM revision and contraction operations to be performed;^{ [2] } this representation was translated into a computational model, and an anytime algorithm for belief revision was developed;^{ [3] }

- Ginsberg–Fagin–Ullman–Vardi
- the maximal subsets of K ∪ {P} that are consistent and contain P are combined by disjunction;

- Nebel
- similar to the above, but a priority among formulae can be given, so that formulae with higher priority are less likely to be retracted than formulae with lower priority.
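WIDTIO, for instance, can be sketched directly over a finite base; the helper names are illustrative and satisfiability is checked by brute force:

```python
from itertools import combinations, product

def satisfiable(formulas, n=2):
    """Brute-force satisfiability over assignments to a, b."""
    return any(all(f(m) for f in formulas)
               for m in product([False, True], repeat=n))

def widtio(base, new):
    """WIDTIO: intersect the maximal subsets of the base consistent
    with the new formula, then add the new formula."""
    maximal = []
    for size in range(len(base), -1, -1):
        for subset in combinations(range(len(base)), size):
            if any(set(subset) <= set(m) for m in maximal):
                continue   # already covered by a larger consistent subset
            if satisfiable([base[i] for i in subset] + [new]):
                maximal.append(subset)
    common = set.intersection(*map(set, maximal)) if maximal else set()
    return [base[i] for i in sorted(common)] + [new]

a = lambda m: m[0]
b = lambda m: m[1]
a_or_b = lambda m: m[0] or m[1]
not_a = lambda m: not m[0]

# The only maximal subset consistent with ¬a is {b, a ∨ b}; a is thrown out.
result = widtio([a, b, a_or_b], not_a)
print(len(result))   # b, a ∨ b, and ¬a remain: 3 formulas
```

When several maximal subsets disagree, WIDTIO keeps only the formulae common to all of them, which is why it tends to discard more information than operators that combine the maximal subsets by disjunction.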

A different realization of the foundational approach to belief revision is based on explicitly declaring the dependencies among beliefs. In truth maintenance systems, dependency links among beliefs can be specified. In other words, one can explicitly declare that a given fact is believed because of one or more other facts; such a dependency is called a *justification*. Beliefs not having any justifications play the role of non-derived beliefs in the non-deductively closed knowledge base approach.

A number of proposals for revision and update based on the set of models of the involved formulae were developed independently of the AGM framework. The principle behind this approach is that a knowledge base is equivalent to a set of *possible worlds*, that is, to a set of scenarios that are considered possible according to that knowledge base. Revision can therefore be performed on the sets of possible worlds rather than on the corresponding knowledge bases.

The revision and update operators based on models are usually identified by the names of their authors: Winslett, Forbus, Satoh, Dalal, Hegner, and Weber. According to the first four of these proposals, the result of revising/updating a knowledge base K by a formula P is characterized by the set of models of P that are the closest to the models of K. Different notions of closeness can be defined, leading to the differences among these proposals.

- Peppas and Williams
- provided the formal relationship between revision and update, introducing the Winslett identity;^{ [1] }

- Dalal
- the models of P having a minimal Hamming distance to the models of K are selected to be the models that result from the change;

- Satoh
- similar to Dalal, but the distance between two models is defined as the set of literals that are given different values by them; similarity between models is defined as set containment of these differences;

- Winslett
- for each model of K, the closest models of P are selected; comparison is done using set containment of the differences;

- Borgida
- equal to Winslett's if K and P are inconsistent; otherwise, the result of revision is K ∧ P;

- Forbus
- similar to Winslett, but the Hamming distance is used.
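As an illustration of how these proposals differ, the following sketch implements Satoh's operator, where closeness is containment of the sets of variables on which two models disagree (Dalal would instead compare the sizes of these sets):

```python
from itertools import product

def diff(m1, m2):
    """Set of variable indices on which two models disagree."""
    return frozenset(i for i, (x, y) in enumerate(zip(m1, m2)) if x != y)

def satoh_revise(k_models, p_models):
    """Satoh revision: keep the models of P whose difference with some
    model of K is minimal under set containment."""
    diffs = {(m, p): diff(m, p) for m in k_models for p in p_models}
    minimal = {d for d in diffs.values()
               if not any(e < d for e in diffs.values())}
    return {p for (m, p), d in diffs.items() if d in minimal}

K = {(True, True, False)}   # the single model of a ∧ b ∧ ¬c
P = {m for m in product([False, True], repeat=3) if not m[0]}  # models of ¬a

print(satoh_revise(K, P))   # the unique closest model: (False, True, False)
```

Here the difference set {a} is strictly contained in every other difference set, so only the model that flips a alone survives; on examples with incomparable difference sets, Satoh and Dalal can disagree.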

The revision operator defined by Hegner makes K not affect the value of the variables that are mentioned in P. What results from this operation is a formula that is consistent with P, and can therefore be conjoined with it. The revision operator by Weber is similar, but the literals that are removed from K are not all literals of P, but only the literals that are evaluated differently by some pair of closest models of K and P according to the Satoh measure of closeness.

The AGM postulates are equivalent to a preference ordering (an ordering over models) being associated with every knowledge base K. However, they do not relate the orderings corresponding to two non-equivalent knowledge bases. In particular, the orderings associated with a knowledge base K and its revised version K * P can be completely different. This is a problem for performing a second revision, as the ordering associated with K * P is necessary to calculate (K * P) * Q.

Establishing a relation between the orderings associated with K and K * P has however been recognized not to be the right solution to this problem. Indeed, the preference relation should depend on the previous history of revisions, rather than only on the resulting knowledge base. More generally, a preference relation gives more information about the state of mind of an agent than a simple knowledge base. Indeed, two states of mind might represent the same piece of knowledge while differing in the way a new piece of knowledge would be incorporated. For example, two people might have the same idea as to where to go on holiday, but they differ on how they would change this idea if they won a million-dollar lottery. Since the basic condition on a preference ordering is that its minimal models are exactly the models of its associated knowledge base, a knowledge base can be considered implicitly represented by a preference ordering (but not vice versa).

Given that a preference ordering allows deriving its associated knowledge base and also allows performing a single step of revision, studies on iterated revision have concentrated on how a preference ordering should be changed in response to a revision. While single-step revision is about how a knowledge base K has to be changed into a new knowledge base K * P, iterated revision is about how a preference ordering (representing both the current knowledge and how plausible the situations believed to be false are considered) should be turned into a new preference ordering when P is learned. A single step of iterated revision produces a new ordering that allows for further revisions.

Two kinds of preference orderings are usually considered: numerical and non-numerical. In the first case, the level of plausibility of a model is represented by a non-negative integer; the lower the rank, the more plausible the situation corresponding to the model. Non-numerical preference orderings correspond to the preference relations used in the AGM framework: a possibly total ordering over models. Non-numerical preference relations were initially considered unsuitable for iterated revision because of the impossibility of reverting a revision by a sequence of other revisions, which is instead possible in the numerical case.

Darwiche and Pearl ^{ [4] } formulated the following postulates for iterated revision.

- if P entails Q, then (K * Q) * P = K * P;
- if P entails ¬Q, then (K * Q) * P = K * P;
- if K * P entails Q, then (K * Q) * P entails Q;
- if K * P does not entail ¬Q, then (K * Q) * P does not entail ¬Q.

Specific iterated revision operators have been proposed by Spohn, Boutilier, Williams, Lehmann, and others. Williams also provided a general iterated revision operator.

- Spohn rejected revision
- this non-numerical proposal was first considered by Spohn, who rejected it based on the fact that revisions can change some orderings in such a way that the original ordering cannot be restored with a sequence of other revisions; this operator changes a preference ordering in view of a new piece of information P by making all models of P preferred over all other models; the original preference ordering is maintained when comparing two models that are both models of P or both non-models of P;

- Natural revision
- while revising a preference ordering by a formula P, all minimal models (according to the preference ordering) of P are made more preferred than all other ones; the original ordering of models is preserved when comparing two models that are not minimal models of P; this operator changes the ordering among models minimally while preserving the property that the models of the knowledge base after revising by P are the minimal models of P according to the preference ordering;

- Transmutations
- Williams provided the first generalization of iterated belief revision using transmutations. She illustrated transmutations using two forms of revision, conditionalization and adjustment, which work on numerical preference orderings; a revision requires not only a formula but also a number or a ranking of an existing belief indicating its degree of plausibility; while the preference ordering is still inverted (the lower a model, the more plausible it is), the degree of plausibility of a revising formula is direct (the higher the degree, the more believed the formula is);

- Ranked revision
- a ranked model, which is an assignment of non-negative integers to models, has to be specified at the beginning; this ranking is similar to a preference ordering, but is not changed by revision; what is changed by a sequence of revisions is a current set of models (representing the current knowledge base) and a number called the rank of the sequence; since this number cannot decrease, some sequences of revisions lead to situations in which every further revision is performed as a full meet revision.
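Natural revision, for example, can be sketched on a numerical ranking (a hypothetical assignment of plausibility ranks over two variables): the minimal models of the revising formula are moved to rank 0 while the relative order of all other models is preserved:

```python
def natural_revise(rank, p_models):
    """Boutilier-style natural revision of a numerical ranking:
    the minimal models of P become the most plausible (rank 0);
    the relative order of all other models is preserved."""
    best = min(rank[m] for m in p_models)
    moved = {m for m in p_models if rank[m] == best}
    return {m: 0 if m in moved else rank[m] + 1 for m in rank}

# Hypothetical ranking over models of two variables (lower = more plausible).
rank = {(True, True): 0, (True, False): 1,
        (False, True): 1, (False, False): 2}
not_a = {(False, True), (False, False)}   # models of ¬a

new_rank = natural_revise(rank, not_a)
# (False, True) is now the unique most plausible model.
print(min(new_rank, key=new_rank.get))
```

Because the output is again a ranking, the operation can be applied repeatedly, which is exactly what single-step AGM revision does not provide.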

The assumption implicit in the revision operator is that the new piece of information P is always to be considered more reliable than the old knowledge base K. This is formalized by the second of the AGM postulates: P is always believed after revising K with P. More generally, one can consider the process of merging several pieces of information (rather than just two) that might or might not have the same reliability. Revision becomes the particular instance of this process in which a less reliable piece of information K is merged with a more reliable one P.

While the input to the revision process is a pair of formulae K and P, the input to merging is a multiset of formulae K1, K2, etc. The use of multisets is necessary because two sources of the merging process might be identical.

When merging a number of knowledge bases with the same degree of plausibility, a distinction is made between arbitration and majority. This distinction depends on the assumption that is made about the information and how it has to be put together.

- Arbitration
- the result of arbitrating two knowledge bases K1 and K2 entails K1 ∨ K2; this condition formalizes the assumption of maintaining as much of the old information as possible, as it is equivalent to imposing that every formula entailed by both knowledge bases is also entailed by the result of their arbitration; in a possible-world view, the "real" world is assumed to be one of the worlds considered possible according to at least one of the two knowledge bases;

- Majority
- the result of merging a knowledge base K1 with other knowledge bases can be forced to entail K1 by adding a sufficient number of other knowledge bases equivalent to K1; this condition corresponds to a kind of vote-by-majority: a sufficiently large number of knowledge bases can always overcome the "opinion" of any other fixed set of knowledge bases.

The above is the original definition of arbitration. According to a newer definition, an arbitration operator is a merging operator that is insensitive to the number of equivalent knowledge bases to merge. This definition makes arbitration the exact opposite of majority.

Postulates for both arbitration and merging have been proposed. An example of an arbitration operator satisfying all postulates is classical disjunction. An example of a majority operator satisfying all postulates is the one selecting all models that have a minimal total Hamming distance to the models of the knowledge bases to merge.
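The Hamming-distance majority operator can be sketched as follows: the models of the merge are those minimizing the total distance to the bases, so a belief repeated in the multiset eventually prevails (helper names are illustrative):

```python
from itertools import product

def hamming(m1, m2):
    return sum(x != y for x, y in zip(m1, m2))

def majority_merge(bases, n=2):
    """Majority merging: select the models minimizing the sum, over all
    bases, of the distance to the nearest model of each base."""
    def score(m):
        return sum(min(hamming(m, k) for k in base) for base in bases)
    worlds = list(product([False, True], repeat=n))
    best = min(score(m) for m in worlds)
    return {m for m in worlds if score(m) == best}

k1 = {(True, True)}    # models of a ∧ b
k2 = {(True, True)}    # a ∧ b again: the duplicate counts (multiset)
k3 = {(False, False)}  # models of ¬a ∧ ¬b

print(majority_merge([k1, k2, k3]))  # the majority wins: {(True, True)}
```

Dropping the duplicate k2 would leave the two opinions tied, illustrating why merging operates on multisets rather than sets.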

A merging operator can be expressed as a family of orderings over models, one for each possible multiset of knowledge bases to merge: the models of the result of merging a multiset of knowledge bases are the minimal models of the ordering associated with the multiset. A merging operator defined in this way satisfies the postulates for merging if and only if the family of orderings meets a given set of conditions. For the old definition of arbitration, the orderings are not over models but over pairs (or, in general, tuples) of models.

Many revision proposals involve orderings over models representing the relative plausibility of the possible alternatives. The problem of merging amounts to combining a set of orderings into a single one expressing the combined plausibility of the alternatives. This is similar to what is done in social choice theory, which studies how the preferences of a group of agents can be combined in a rational way. Belief revision and social choice theory are similar in that they combine a set of orderings into one. They differ in how these orderings are interpreted: preferences in social choice theory, plausibility in belief revision. Another difference is that the alternatives are explicitly enumerated in social choice theory, while they are the propositional models over a given alphabet in belief revision.

From the point of view of computational complexity, the most studied problem about belief revision is that of query answering in the propositional case. This is the problem of establishing whether a formula follows from the result of a revision, that is, K * P ⊨ Q, where K, P, and Q are propositional formulae. More generally, query answering is the problem of telling whether a formula is entailed by the result of a belief change operation, which could be update, merging, revision, iterated revision, etc. Another problem that has received some attention is that of model checking, that is, checking whether a model satisfies the result of a belief revision. A related question is whether such a result can be represented in space polynomial in the size of its arguments.

Since a deductively closed knowledge base is infinite, complexity studies on belief revision operators working on deductively closed knowledge bases are done under the assumption that such a knowledge base is given in the form of an equivalent finite knowledge base.

A distinction is made between belief revision operators and belief revision schemes. While the former are simple mathematical operators mapping a pair of formulae into another formula, the latter depend on further information such as a preference relation. For example, the Dalal revision is an operator because, once two formulae K and P are given, no other information is needed to compute K * P. On the other hand, revision based on a preference relation is a revision scheme, because K and P do not allow determining the result of revision if the family of preference orderings between models is not given. The complexity of revision schemes is determined under the assumption that the extra information needed to compute revision is given in some compact form. For example, a preference relation can be represented by a sequence of formulae whose models are increasingly preferred. Explicitly storing the relation as a set of pairs of models is instead not a compact representation of preference, because the space required is exponential in the number of propositional letters.

The complexity of query answering and model checking in the propositional case is in the second level of the polynomial hierarchy for most belief revision operators and schemas. Most revision operators suffer from the problem of representational blow up: the result of revising two formulae is not necessarily representable in space polynomial in that of the two original formulae. In other words, revision may exponentially increase the size of the knowledge base.

Williams, Peppas, Foo and Chopra have shown how relevance can be exploited in belief revision; the results were reported in the *Artificial Intelligence* journal.^{ [5] }

Belief revision has also been used to demonstrate the acknowledgement of intrinsic social capital in closed networks.^{ [6] }

Systems specifically implementing belief revision are:

- SATEN – an object-oriented web-based revision and extraction engine (Williams, Sims)^{ [7] }
- ADS – SAT solver–based belief revision (Benferhat, Kaci, Le Berre, Williams)^{ [8] }
- BReLS^{ [9] }
- Immortal^{ [10] }

Two systems including a belief revision feature are SNePS^{ [11] } and Cyc.

1. Peppas, Pavlos; Williams, Mary-Anne (1995). "Constructive Modelings for Theory Change". *Notre Dame Journal of Formal Logic*. **36**: 120–133. doi:10.1305/ndjfl/1040308831. MR 1359110. Zbl 0844.03017.
2. Williams, Mary-Anne (1994). "On the Logic of Theory Base Change". *JELIA '94: Proceedings of the European Conference on Logics in Artificial Intelligence*. pp. 86–105. ISBN 9783540583325.
3. Williams, Mary-Anne (1997). "Anytime Belief Revision". *IJCAI'97: Proceedings of the 15th International Joint Conference on Artificial Intelligence*. Vol. 1, pp. 74–79.
4. Darwiche, Adnan; Pearl, Judea (1997). "On the logic of iterated belief revision". *Artificial Intelligence*. **89** (1–2): 1–29. doi:10.1016/S0004-3702(96)00038-0.
5. Peppas, Pavlos; Williams, Mary-Anne; Chopra, Samir; Foo, Norman (2015). "Relevance in belief revision". *Artificial Intelligence*. **229**: 126–138. doi:10.1016/j.artint.2015.08.007.
6. Koley, Gaurav; Deshmukh, Jayati; Srinivasa, Srinath (2020). "Social Capital as Engagement and Belief Revision". *Social Informatics*. Lecture Notes in Computer Science. **12467**. Cham: Springer International Publishing: 137–151. doi:10.1007/978-3-030-60975-7_11. ISBN 978-3-030-60975-7. S2CID 222233101.
7. Williams, Mary-Anne; Sims, Aidan (2000). "SATEN: An Object-Oriented Web-Based Revision and Extraction Engine". arXiv:cs/0003059.
8. Benferhat, Salem; Kaci, Souhila; Le Berre, Daniel; Williams, Mary-Anne (2004). "Weakening conflicting information for iterated revision and knowledge integration". *Artificial Intelligence*. **153** (1–2): 339–371. doi:10.1016/j.artint.2003.08.003.
9. Liberatore, Paolo; Schaerf, Marco (2000). "BReLS: a system for the integration of knowledge bases". *KR'00: Proceedings of the Seventh International Conference on Principles of Knowledge Representation and Reasoning*. Breckenridge, Colorado: Morgan Kaufmann. pp. 145–152.
10. Chou, Timothy S. C.; Winslett, Marianne (1991). "The implementation of a model-based belief revision system". *ACM SIGART Bulletin*. **2** (3): 28–34. doi:10.1145/122296.122301. S2CID 18021282.
11. Martins, João P.; Shapiro, Stuart C. (1988). "A model for belief revision". *Artificial Intelligence*. **35** (1): 25–79. doi:10.1016/0004-3702(88)90031-8.

An **axiom**, **postulate**, or **assumption** is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Ancient Greek word ἀξίωμα (*axíōma*), meaning 'that which is thought worthy or fit' or 'that which commends itself as evident'.

In artificial intelligence, the **frame problem** describes an issue with using first-order logic (FOL) to express facts about a robot in the world. Representing the state of a robot with traditional FOL requires the use of many axioms that simply imply that things in the environment do not change arbitrarily. For example, Hayes describes a "block world" with rules about stacking blocks together. In a FOL system, additional axioms are required to make inferences about the environment. The frame problem is the problem of finding adequate collections of axioms for a viable description of a robot environment.
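The bookkeeping that frame axioms perform can be sketched in a few lines. The block-world fluents and the `apply_stack` action below are hypothetical; the dictionary copy stands in for the many FOL axioms asserting that unaffected fluents persist.

```python
def apply_stack(state, x, y):
    """Apply the action stack(x, y) to a state mapping
    fluent names to truth values."""
    new_state = dict(state)            # frame "axioms": everything else persists
    new_state[f"on({x},{y})"] = True   # effect axiom
    new_state[f"clear({y})"] = False   # effect axiom
    return new_state

s0 = {"on(a,table)": True, "on(b,table)": True,
      "clear(a)": True, "clear(b)": True}
s1 = apply_stack(s0, "a", "b")   # on(b,table) and clear(a) persist unchanged
```

In pure FOL there is no single "copy the rest" operation: each fluent left untouched by each action needs its own frame axiom, and that proliferation is exactly what the frame problem names.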

**Inductive logic programming** (**ILP**) is a subfield of symbolic artificial intelligence which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples.
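The generate-and-test core of ILP can be sketched on a toy task, assuming a hypothetical family-tree background and just two candidate clause templates; real ILP systems search a far larger hypothesis space.

```python
# Background knowledge: parent/2 facts (hypothetical family tree).
parent = {("ann", "bob"), ("bob", "carol"), ("ann", "dan")}
people = {p for pair in parent for p in pair}

# Examples for the target predicate grandparent/2.
positives = {("ann", "carol")}
negatives = {("ann", "bob"), ("bob", "ann")}

# Candidate hypotheses (clause templates for grandparent(X, Y)).
def via_parent(x, y):   # grandparent(X,Y) :- parent(X,Y).
    return (x, y) in parent

def via_chain(x, y):    # grandparent(X,Y) :- parent(X,Z), parent(Z,Y).
    return any((x, z) in parent and (z, y) in parent for z in people)

def consistent(h):
    """A hypothesis must entail every positive and no negative example."""
    return all(h(*e) for e in positives) and not any(h(*e) for e in negatives)

learned = [name for name, h in [("via_parent", via_parent),
                                ("via_chain", via_chain)]
           if consistent(h)]   # only the chained clause survives
```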

**Abductive reasoning** is a form of logical inference formulated and advanced by American philosopher Charles Sanders Peirce beginning in the last third of the 19th century. It starts with an observation or set of observations and then seeks the simplest and most likely conclusion from the observations. This process, unlike deductive reasoning, yields a plausible conclusion but does not positively verify it. Abductive conclusions are thus qualified as having a remnant of uncertainty or doubt, which is expressed in retreat terms such as "best available" or "most likely". One can understand abductive reasoning as **inference to the best explanation**, although not all usages of the terms *abduction* and *inference to the best explanation* are exactly equivalent.

**Deductive reasoning** is the mental process of drawing deductive inferences. An inference is deductively valid if its conclusion follows logically from its premises, i.e. if it is impossible for the premises to be true and the conclusion to be false. For example, the inference from the premises "all men are mortal" and "Socrates is a man" to the conclusion "Socrates is mortal" is deductively valid. An argument is *sound* if it is *valid* and all its premises are true. Some theorists define deduction in terms of the intentions of the author: they have to intend for the premises to offer deductive support to the conclusion. With the help of this modification, it is possible to distinguish valid from invalid deductive reasoning: it is invalid if the author's belief about the deductive support is false, but even invalid deductive reasoning is a form of deductive reasoning.
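Validity in the sense just defined — no case makes the premises true and the conclusion false — can be checked mechanically for propositional inferences. A small truth-table sketch, with formulas as predicates over assignments:

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Deductively valid iff no truth assignment satisfies all
    premises while falsifying the conclusion."""
    for vals in product([False, True], repeat=len(atoms)):
        m = dict(zip(atoms, vals))
        if all(p(m) for p in premises) and not conclusion(m):
            return False
    return True

# Modus ponens: from p and p -> q, infer q (valid).
mp = valid([lambda m: m["p"], lambda m: not m["p"] or m["q"]],
           lambda m: m["q"], ["p", "q"])
# Affirming the consequent: from q and p -> q, infer p (invalid).
ac = valid([lambda m: m["q"], lambda m: not m["p"] or m["q"]],
           lambda m: m["p"], ["p", "q"])
```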

In traditional logic, a **contradiction** occurs when a proposition conflicts either with itself or established fact. It is often used as a tool to detect disingenuous beliefs and bias. Illustrating a general tendency in applied logic, Aristotle's law of noncontradiction states that "It is impossible that the same thing can at the same time both belong and not belong to the same object and in the same respect."

**Description logics** (**DL**) are a family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between expressive power and reasoning complexity by supporting different sets of mathematical constructors.

The **theory of belief functions**, also referred to as **evidence theory** or **Dempster–Shafer theory** (**DST**), is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories. First introduced by Arthur P. Dempster in the context of statistical inference, the theory was later developed by Glenn Shafer into a general framework for modeling epistemic uncertainty—a mathematical theory of evidence. The theory allows one to combine evidence from different sources and arrive at a degree of belief that takes into account all the available evidence.
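Dempster's rule of combination — the theory's mechanism for pooling evidence from two sources — multiplies the masses of intersecting focal elements and renormalizes away the conflicting mass. A minimal sketch over a hypothetical two-hypothesis frame:

```python
def combine(m1, m2):
    """Dempster's rule for mass functions whose focal elements are
    frozensets; mass falling on empty intersections is conflict
    and is renormalized away."""
    out, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            c = a & b
            if c:
                out[c] = out.get(c, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {s: w / (1.0 - conflict) for s, w in out.items()}

# Two sources of evidence about whether a fault lies in x or y.
X, Y, XY = frozenset({"x"}), frozenset({"y"}), frozenset({"x", "y"})
m = combine({X: 0.6, XY: 0.4}, {Y: 0.7, XY: 0.3})
# The conflicting mass 0.6 * 0.7 = 0.42 is discarded; the rest sums to 1.
```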

A **non-monotonic logic** is a formal logic whose conclusion relation is not monotonic. In other words, non-monotonic logics are devised to capture and represent defeasible inferences, i.e., a kind of inference in which reasoners draw tentative conclusions, enabling reasoners to retract their conclusion(s) based on further evidence. Most studied formal logics have a monotonic entailment relation, meaning that adding a formula to a theory never produces a pruning of its set of conclusions. Intuitively, monotonicity indicates that learning a new piece of knowledge cannot reduce the set of what is known. A monotonic logic cannot handle various reasoning tasks such as reasoning by default, abductive reasoning, some important approaches to reasoning about knowledge, and similarly, belief revision.
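The failure of monotonicity is easy to exhibit: below, enlarging the fact set retracts a previously drawn conclusion, which no monotonic entailment relation allows. The birds-and-penguins default is the standard textbook example.

```python
def flies(facts):
    """Default rule: a bird flies unless the extra fact "penguin"
    blocks the default (negation as failure on abnormality)."""
    return "bird" in facts and "penguin" not in facts

before = flies({"bird"})              # tentative conclusion drawn
after = flies({"bird", "penguin"})    # retracted after learning more
```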

**Default logic** is a non-monotonic logic proposed by Raymond Reiter to formalize reasoning with default assumptions.

In proof theory, the **semantic tableau** is a decision procedure for sentential and related logics, and a proof procedure for formulae of first-order logic. An analytic tableau is a tree structure computed for a logical formula, having at each node a subformula of the original formula to be proved or refuted. Computation constructs this tree and uses it to prove or refute the whole formula. The tableau method can also determine the satisfiability of finite sets of formulas of various logics. It is the most popular proof procedure for modal logics.
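A propositional tableau prover fits in a few dozen lines; the tuple encoding of formulas below is an illustrative choice. A branch closes when it contains a literal and its negation, and a formula is unsatisfiable when every branch closes.

```python
def satisfiable(branch):
    """Analytic tableau for propositional logic. `branch` is a list
    of formulas built from ("atom", name), ("not", f), ("and", f, g),
    ("or", f, g); returns True iff some branch stays open."""
    lits = {f for f in branch if f[0] == "atom"
            or (f[0] == "not" and f[1][0] == "atom")}
    for f in lits:                       # close on p together with not-p
        if f[0] == "atom" and ("not", f) in lits:
            return False
    rest = [f for f in branch if f not in lits]
    if not rest:                         # only non-contradictory literals left
        return True
    f, rest = rest[0], rest[1:]
    if f[0] == "and":                    # alpha rule: both conjuncts
        return satisfiable(rest + [f[1], f[2]] + list(lits))
    if f[0] == "or":                     # beta rule: split the branch
        return (satisfiable(rest + [f[1]] + list(lits)) or
                satisfiable(rest + [f[2]] + list(lits)))
    g = f[1]                             # f is a negated compound
    if g[0] == "not":                    # double negation
        return satisfiable(rest + [g[1]] + list(lits))
    if g[0] == "and":                    # De Morgan
        return satisfiable(rest + [("or", ("not", g[1]), ("not", g[2]))]
                           + list(lits))
    return satisfiable(rest + [("not", g[1]), ("not", g[2])] + list(lits))

p, q = ("atom", "p"), ("atom", "q")
contradiction_sat = satisfiable([("and", p, ("not", p))])          # closes
excluded_middle_valid = not satisfiable([("not", ("or", p, ("not", p)))])
```

To prove a formula valid, run the tableau on its negation: the formula is a tautology exactly when that tableau closes.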

The **situation calculus** is a logic formalism designed for representing and reasoning about dynamical domains. It was first introduced by John McCarthy in 1963; the main modern version is based on that introduced by Ray Reiter in 1991, alongside McCarthy's 1986 version and a logic programming formulation.

**Answer set programming** (**ASP**) is a form of declarative programming oriented towards difficult search problems. It is based on the stable model semantics of logic programming. In ASP, search problems are reduced to computing stable models, and *answer set solvers*—programs for generating stable models—are used to perform search. The computational process employed in the design of many answer set solvers is an enhancement of the DPLL algorithm and, in principle, it always terminates.
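The stable model semantics can be checked brute-force via the Gelfond–Lifschitz reduct; the rule encoding below (head, positive body, negative body) is an illustrative sketch, not an answer set solver.

```python
def stable(program, candidate):
    """Check whether `candidate` (a set of atoms) is a stable model.
    Rules are (head, positive_body, negative_body) triples. The reduct
    drops rules whose negative body meets the candidate; the candidate
    must then equal the least model of the reduct."""
    reduct = [(h, pos) for (h, pos, neg) in program
              if not (set(neg) & candidate)]
    model, changed = set(), True
    while changed:                       # least fixpoint of the reduct
        changed = False
        for h, pos in reduct:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model == candidate

# The classic even cycle:  p :- not q.   q :- not p.
prog = [("p", [], ["q"]), ("q", [], ["p"])]   # stable models: {p} and {q}
```

Real answer set solvers search this space with DPLL-style enhancements instead of testing candidate sets one by one.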

The **closed-world assumption** (CWA), in a formal system of logic used for knowledge representation, is the presumption that a statement that is true is also known to be true; conversely, what is not currently known to be true is taken to be false. The same name also refers to a logical formalization of this assumption by Raymond Reiter. The opposite of the closed-world assumption is the open-world assumption (OWA), under which lack of knowledge does not imply falsity. Whether CWA or OWA is adopted determines the actual semantics of a conceptual expression written with the same notation of concepts, so a successful formalization of natural language semantics usually cannot avoid making explicit whether its implicit logical background assumes CWA or OWA.
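Negation as failure is the operational face of the CWA: a query that cannot be proved from the database is answered negatively. A minimal sketch over a hypothetical flight table:

```python
known_true = {"flight(ny, london)", "flight(london, paris)"}

def holds(fact):
    """Closed-world assumption: whatever is not recorded as true
    is taken to be false."""
    return fact in known_true

no_direct = not holds("flight(ny, paris)")   # CWA licenses "no such flight"
# Under the open-world assumption the same query would merely be unknown.
```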

In philosophical logic, **defeasible reasoning** is a kind of reasoning that is rationally compelling, though not deductively valid. It usually occurs when a rule is given, but there may be specific exceptions to the rule, or subclasses that are subject to a different rule. Defeasibility is found in literatures that are concerned with argument and the process of argument, or heuristic reasoning.

**Circumscription** is a non-monotonic logic created by John McCarthy to formalize the common sense assumption that things are as expected unless otherwise specified. Circumscription was later used by McCarthy in an attempt to solve the frame problem. To implement circumscription in its initial formulation, McCarthy augmented first-order logic to allow the minimization of the extension of some predicates, where the extension of a predicate is the set of tuples of values the predicate is true on. This minimization is similar to the closed-world assumption that what is not known to be true is false.
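In the propositional case, circumscription can be computed brute-force by keeping only the models whose extension of the minimized atoms is subset-minimal; the abnormality theory below is the classic illustration.

```python
from itertools import product

def minimal_models(theory, atoms, minimized):
    """Brute-force propositional circumscription: among the models of
    `theory`, keep those whose extension of the `minimized` atoms is
    subset-minimal."""
    ms = [dict(zip(atoms, vs))
          for vs in product([False, True], repeat=len(atoms))
          if theory(dict(zip(atoms, vs)))]
    ext = lambda m: {a for a in minimized if m[a]}
    return [m for m in ms if not any(ext(n) < ext(m) for n in ms)]

# Theory: bird, and (abnormal or flies). Circumscribing "abnormal"
# leaves only the expected model: the bird flies, nothing is abnormal.
theory = lambda m: m["bird"] and (m["abnormal"] or m["flies"])
result = minimal_models(theory, ["bird", "abnormal", "flies"], ["abnormal"])
```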

The **autoepistemic logic** is a formal logic for the representation and reasoning of knowledge about knowledge. While propositional logic can only express facts, autoepistemic logic can express knowledge and lack of knowledge about facts.

**Epistemic modal logic** is a subfield of modal logic that is concerned with reasoning about knowledge. While epistemology has a long philosophical tradition dating back to Ancient Greece, epistemic logic is a much more recent development with applications in many fields, including philosophy, theoretical computer science, artificial intelligence, economics and linguistics. While philosophers since Aristotle have discussed modal logic, and Medieval philosophers such as Avicenna, Ockham, and Duns Scotus developed many of their observations, it was C. I. Lewis who created the first symbolic and systematic approach to the topic, in 1912. It continued to mature as a field, reaching its modern form in 1963 with the work of Kripke.

In artificial intelligence and related fields, an **argumentation framework** is a way to deal with contentious information and draw conclusions from it using formalized arguments.

**Dynamic epistemic logic** (DEL) is a logical framework dealing with knowledge and information change. Typically, DEL focuses on situations involving multiple agents and studies how their knowledge changes when events occur. These events can change factual properties of the actual world: for example, a red card is painted blue. They can also bring about changes of knowledge without changing factual properties of the world: for example, a card is publicly revealed to be red. Originally, DEL focused on epistemic events of this latter kind.


This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
