Completeness (logic)

In mathematical logic and metalogic, a formal system is called complete with respect to a particular property if every formula having the property can be derived using that system, i.e. is one of its theorems; otherwise the system is said to be incomplete. The term "complete" is also used without qualification, with differing meanings depending on the context, mostly referring to the property of semantical validity. Intuitively, a system is called complete in this particular sense if it can derive every formula that is true.

The property converse to completeness is called soundness: a system is sound with respect to a property (mostly semantical validity) if each of its theorems has that property.
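
In symbols, writing $\vdash_S \varphi$ for "$\varphi$ is a theorem of the system $S$" and $\models_S \varphi$ for "$\varphi$ is semantically valid", the two properties are converse implications (a schematic summary; notation varies between texts):

$\vdash_S \varphi \implies \models_S \varphi$ (soundness)

$\models_S \varphi \implies \vdash_S \varphi$ (semantic completeness)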

Forms of completeness

Expressive completeness

A formal language is expressively complete if it can express the subject matter for which it is intended.

Functional completeness

A set of logical connectives associated with a formal system is functionally complete if it can express all propositional functions.
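
As an illustration, the single connective NAND (the Sheffer stroke) is functionally complete on its own. The short Python sketch below (an illustrative check, not drawn from the article's sources) defines negation, conjunction, and disjunction in terms of NAND and verifies the definitions against full truth tables; together with the known functional completeness of {¬, ∧, ∨}, this shows every propositional function is expressible from NAND alone:

    # A check that NOT, AND, and OR are definable from NAND alone
    # (illustrative sketch; names are ad hoc, not from any source).
    from itertools import product

    def nand(p, q):
        return not (p and q)

    # Standard definitions of the other connectives in terms of NAND.
    def not_(p):
        return nand(p, p)

    def and_(p, q):
        return nand(nand(p, q), nand(p, q))

    def or_(p, q):
        return nand(nand(p, p), nand(q, q))

    # Verify each definition against the intended truth table.
    for p, q in product([False, True], repeat=2):
        assert not_(p) == (not p)
        assert and_(p, q) == (p and q)
        assert or_(p, q) == (p or q)
    print("NOT, AND and OR are all definable from NAND alone")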

Semantic completeness

Semantic completeness is the converse of soundness for formal systems. A formal system is complete with respect to tautologousness or "semantically complete" when all its tautologies are theorems, whereas a formal system is "sound" when all theorems are tautologies (that is, they are semantically valid formulas: formulas that are true under every interpretation of the language of the system that is consistent with the rules of the system). That is, a formal system is semantically complete if

$\models_S \varphi \implies \vdash_S \varphi$ [1]

For example, Gödel's completeness theorem establishes semantic completeness for first-order logic.

Strong completeness

A formal system S is strongly complete or complete in the strong sense if for every set of premises Γ, any formula that semantically follows from Γ is derivable from Γ. That is:

$\Gamma \models_S \varphi \implies \Gamma \vdash_S \varphi$

Refutation-completeness

A formal system S is refutation-complete if it is able to derive falsity ($\bot$) from every unsatisfiable set of formulas. That is,

$\Gamma \models_S \bot \implies \Gamma \vdash_S \bot$ [2]

Every strongly complete system is also refutation-complete. Intuitively, strong completeness means that, given a formula set $\Gamma$, it is possible to compute every semantical consequence of $\Gamma$, while refutation-completeness means that, given a formula set $\Gamma$ and a formula $\varphi$, it is possible to check whether $\varphi$ is a semantical consequence of $\Gamma$: since $\Gamma \models \varphi$ holds exactly when $\Gamma \cup \{\lnot\varphi\}$ is unsatisfiable, a refutation of $\Gamma \cup \{\lnot\varphi\}$ establishes the consequence.

Examples of refutation-complete systems include SLD resolution on Horn clauses, superposition on equational clausal first-order logic, and Robinson's resolution on clause sets. [3] The latter is not strongly complete: e.g. $\{a\} \models a \lor b$ holds even in the propositional subset of first-order logic, but $a \lor b$ cannot be derived from $\{a\}$ by resolution. However, $\{a, \lnot(a \lor b)\} \vdash \bot$ can be derived.
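
The propositional example above can be replayed mechanically. Below is a toy Python sketch (illustrative only; it assumes clauses are represented as frozensets of string literals, with a leading "~" marking negation) that saturates a clause set under the resolution rule and reports whether the empty clause, i.e. $\bot$, is derivable:

    # Toy propositional resolution prover (illustrative sketch).
    # A clause is a frozenset of literals; "~a" denotes the negation of "a".

    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolvents(c1, c2):
        """All clauses obtainable by resolving c1 against c2."""
        out = set()
        for lit in c1:
            if negate(lit) in c2:
                out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
        return out

    def refutes(clauses):
        """Saturate under resolution; True iff the empty clause is derived."""
        clauses = set(clauses)
        while True:
            new = set()
            for c1 in clauses:
                for c2 in clauses:
                    if c1 != c2:
                        new |= resolvents(c1, c2)
            if frozenset() in new:
                return True
            if new <= clauses:  # saturated without deriving the empty clause
                return False
            clauses |= new

    # The satisfiable set {a} admits no refutation (and resolution never
    # produces the clause a-or-b from it):
    print(refutes([frozenset({"a"})]))  # False

    # Adding the clause form of not(a or b), i.e. the unit clauses ~a
    # and ~b, yields a refutation:
    print(refutes([frozenset({"a"}),
                   frozenset({"~a"}), frozenset({"~b"})]))  # True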

Syntactical completeness

A formal system S is syntactically complete (also deductively complete or maximally complete) if for each sentence (closed formula) φ of the language of the system, either φ or ¬φ is a theorem of S. This is also called negation completeness, and is stronger than semantic completeness. In another sense, a formal system is syntactically complete if and only if no unprovable sentence can be added to it without introducing an inconsistency. Truth-functional propositional logic and first-order predicate logic are semantically complete, but not syntactically complete: for example, the propositional formula consisting of a single propositional variable A is not a theorem, and neither is its negation. Gödel's incompleteness theorem shows that any effectively axiomatized system that is sufficiently expressive, such as Peano arithmetic, cannot be both consistent and syntactically complete.
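
Because truth-functional propositional logic is sound and semantically complete, theoremhood coincides with being a tautology, so the example above can be checked by truth tables. A minimal Python sketch (illustrative; formulas in one variable are encoded as functions from a truth value to a truth value):

    # Neither the formula A nor its negation is a tautology, so by
    # soundness neither is a theorem: semantic completeness does not
    # imply syntactic completeness.
    def is_tautology(formula):
        return all(formula(v) for v in (False, True))

    A = lambda v: v            # the bare propositional variable A
    not_A = lambda v: not v    # its negation

    print(is_tautology(A), is_tautology(not_A))  # False False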

Structural completeness

In superintuitionistic and modal logics, a logic is structurally complete if every admissible rule is derivable. For example, classical propositional logic is structurally complete, whereas intuitionistic propositional logic is not: some of its admissible rules, such as the Kreisel–Putnam rule, are not derivable.

Related Research Articles

First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.

Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic.

Propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives. Propositions that contain no logical connectives are called atomic propositions.

In classical deductive logic, a consistent theory is one that does not lead to a logical contradiction. The lack of contradiction can be defined in either semantic or syntactic terms. The semantic definition states that a theory is consistent if it has a model, i.e., there exists an interpretation under which all formulas in the theory are true. This is the sense used in traditional Aristotelian logic, although in contemporary mathematical logic the term satisfiable is used instead. The syntactic definition states that a theory $T$ is consistent if there is no formula $\varphi$ such that both $\varphi$ and its negation $\lnot\varphi$ are elements of the set of consequences of $T$. Let $A$ be a set of closed sentences and $\langle A \rangle$ the set of closed sentences provable from $A$ under some formal deductive system. The set of axioms $A$ is consistent when $\varphi, \lnot\varphi \in \langle A \rangle$ for no formula $\varphi$.

In traditional logic, a contradiction occurs when a proposition conflicts either with itself or with established fact. It is often used as a tool to detect disingenuous beliefs and bias. Illustrating a general tendency in applied logic, Aristotle's law of noncontradiction states that "It is impossible that the same thing can at the same time both belong and not belong to the same object and in the same respect."

Intuitionistic logic, sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic.

In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than to David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems in a first-order language rather than conditional tautologies.

Metalogic is the study of the metatheory of logic. Whereas logic studies how logical systems can be used to construct valid and sound arguments, metalogic studies the properties of logical systems. Logic concerns the truths that may be derived using a logical system; metalogic concerns the truths that may be derived about the languages and systems that are used to express truths.

In logic, syntax is anything having to do with formal languages or formal systems without regard to any interpretation or meaning given to them. Syntax is concerned with the rules used for constructing, or transforming the symbols and words of a language, as contrasted with the semantics of a language which is concerned with its meaning.

In predicate logic, generalization is a valid inference rule. It states that if $\vdash P(x)$ has been derived, then $\vdash \forall x\, P(x)$ can be derived.

In logic, a rule of inference is admissible in a formal system if the set of theorems of the system does not change when that rule is added to the existing rules of the system. In other words, every formula that can be derived using that rule is already derivable without that rule, so, in a sense, it is redundant. The concept of an admissible rule was introduced by Paul Lorenzen (1955).

In mathematical logic, Heyting arithmetic is an axiomatization of arithmetic in accordance with the philosophy of intuitionism. It is named after Arend Heyting, who first proposed it.

In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem.

In mathematical logic, a theory is a set of sentences in a formal language. In most scenarios a deductive system is first understood from context, after which an element of a deductively closed theory is then called a theorem of the theory. In many deductive systems there is usually a subset that is called "the set of axioms" of the theory , in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. A first-order theory is a set of first-order sentences (theorems) recursively obtained by the inference rules of the system applied to the set of axioms.

In mathematical logic, the implicational propositional calculus is a version of classical propositional calculus which uses only one connective, called implication or conditional. In formulas, this binary operation is indicated by "implies", "if ..., then ...", "→", "⊃", etc.

In logic, especially mathematical logic, a Hilbert system, sometimes called Hilbert calculus, Hilbert-style deductive system or Hilbert–Ackermann system, is a type of system of formal deduction attributed to Gottlob Frege and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well.

In logic, the symbol ⊨, ⊧, or ⊫ is called the double turnstile. It is often read as "entails", "models", "is a semantic consequence of", or "is stronger than". It is closely related to the turnstile symbol ⊢, which has a single bar across the middle, and which denotes syntactic consequence.

In propositional logic, tautology is either of two commonly used rules of replacement. The rules are used to eliminate redundancy in disjunctions and conjunctions when they occur in logical proofs. They are the principle of idempotency of disjunction, $P \lor P \Leftrightarrow P$, and the principle of idempotency of conjunction, $P \land P \Leftrightarrow P$.

Logical consequence is a fundamental concept in logic which describes the relationship between statements that hold true when one statement logically follows from one or more statements. A valid logical argument is one in which the conclusion is entailed by the premises, because the conclusion is the consequence of the premises. The philosophical analysis of logical consequence involves the questions: In what sense does a conclusion follow from its premises? and What does it mean for a conclusion to be a consequence of premises? All of philosophical logic is meant to provide accounts of the nature of logical consequence and the nature of logical truth.

References

  1. Hunter, Geoffrey (1971). Metalogic: An Introduction to the Metatheory of Standard First-Order Logic. University of California Press.
  2. Duffy, David A. (1991). Principles of Automated Theorem Proving. Wiley. Sect. 2.2.3.1, p. 33.
  3. Russell, Stuart J.; Norvig, Peter (1995). Artificial Intelligence: A Modern Approach. Prentice Hall. Sect. 9.7, p. 286.