Literal (mathematical logic)

In mathematical logic, a literal is an atomic formula (also known as an atom or prime formula) or its negation. [1] [2] The definition mostly appears in proof theory (of classical logic), e.g. in conjunctive normal form and the method of resolution.

Literals can be divided into two types: [2]

- A positive literal is just an atom (e.g., $x$).
- A negative literal is the negation of an atom (e.g., $\lnot x$).

The polarity of a literal is positive or negative depending on whether it is a positive or negative literal.

In logics with double negation elimination (where $\lnot\lnot x \equiv x$) the complementary literal or complement of a literal $l$ can be defined as the literal corresponding to the negation of $l$. [3] We can write $\bar{l}$ to denote the complementary literal of $l$. More precisely, if $l = x$ then $\bar{l}$ is $\lnot x$, and if $l = \lnot x$ then $\bar{l}$ is $x$. Double negation elimination occurs in classical logics but not in intuitionistic logic.
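As a concrete illustration, here is a minimal Python sketch (not drawn from the cited sources; the string encoding with "~" for negation is an assumption) of taking the complement of a propositional literal, with double negation elimination built in:

```python
def complement(literal: str) -> str:
    """Return the complementary literal, writing '~' for negation.

    Double negation elimination is built in: the complement of the
    negative literal '~x' is the atom 'x' itself, never '~~x'.
    """
    if literal.startswith("~"):
        return literal[1:]    # complement of ~x is x
    return "~" + literal      # complement of x is ~x

assert complement("x") == "~x"
assert complement("~x") == "x"
```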

In the context of a formula in conjunctive normal form, a literal is pure if the literal's complement does not appear in the formula.
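For example, in $(x \lor y) \land (\lnot y \lor z)$ the literals $x$ and $z$ are pure, while $y$ and $\lnot y$ are not. Below is a small Python sketch of pure-literal detection, assuming an illustrative encoding of a CNF formula as a list of clauses, each clause a set of string literals with "~" marking negation:

```python
def pure_literals(cnf: list[set[str]]) -> set[str]:
    """Return every literal of a CNF formula whose complement never occurs."""
    occurring = set().union(*cnf)

    def negate(l: str) -> str:
        return l[1:] if l.startswith("~") else "~" + l

    return {l for l in occurring if negate(l) not in occurring}

# (x or y) and (~y or z): x and z are pure; y and ~y are not.
assert pure_literals([{"x", "y"}, {"~y", "z"}]) == {"x", "z"}
```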

In Boolean functions, each separate occurrence of a variable, either in complemented or uncomplemented form, is a literal. For example, if x, y, and z are variables then the expression x · y′ · z contains three literals and the expression (x · y′) + (x′ · z) contains four literals. However, the expression (x · z′) + (y · z′) would also be said to contain four literals, because although two of the literals are identical (z′ appears twice) these qualify as two separate occurrences. [4]
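This counting convention can be made mechanical. The sketch below (an illustration, writing variables in lowercase and complementation as a trailing ASCII prime, mirroring the assumed expressions above) counts each occurrence of a variable, complemented or not, as one literal:

```python
import re

def count_literals(expr: str) -> int:
    """Count literal occurrences: each appearance of a variable,
    complemented (trailing ') or uncomplemented, is one literal."""
    return len(re.findall(r"[a-z]'?", expr))

assert count_literals("x . y' . z") == 3           # three literals
assert count_literals("(x . y') + (x' . z)") == 4  # four literals
assert count_literals("(x . z') + (y . z')") == 4  # z' occurs twice
```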

Examples

In propositional calculus a literal is simply a propositional variable or its negation.

In predicate calculus a literal is an atomic formula or its negation, where an atomic formula is a predicate symbol applied to some terms, with the terms recursively defined starting from constant symbols, variable symbols, and function symbols. For example, $\lnot Q(f(g(x)), y, 2)$ is a negative literal with the constant symbol 2, the variable symbols x, y, the function symbols f, g, and the predicate symbol Q.
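One way to make the recursive structure explicit is to represent terms and literals as small syntax trees. The Python sketch below (the Term and Literal classes are illustrative assumptions, not a standard API) encodes the literal $\lnot Q(f(g(x)), y, 2)$:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    """A term: a constant or variable (no args) or a function application."""
    symbol: str
    args: tuple["Term", ...] = ()

@dataclass(frozen=True)
class Literal:
    """An atomic formula (a predicate applied to terms), possibly negated."""
    predicate: str
    args: tuple[Term, ...]
    negated: bool = False

# The negative literal ~Q(f(g(x)), y, 2):
x, y, two = Term("x"), Term("y"), Term("2")
lit = Literal("Q", (Term("f", (Term("g", (x,)),)), y, two), negated=True)
```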

Related Research Articles

First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.

In propositional logic and Boolean algebra, De Morgan's laws, also known as De Morgan's theorem, are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation.

In Boolean logic, a disjunctive normal form (DNF) is a canonical normal form of a logical formula consisting of a disjunction of conjunctions; it can also be described as an OR of ANDs, a sum of products, or (in philosophical logic) a cluster concept. As a normal form, it is useful in automated theorem proving.

In Boolean logic, a formula is in conjunctive normal form (CNF) or clausal normal form if it is a conjunction of one or more clauses, where a clause is a disjunction of literals; otherwise put, it is a product of sums or an AND of ORs. As a canonical normal form, it is useful in automated theorem proving and circuit theory.
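To tie this back to literals: a CNF formula is true under a truth assignment exactly when every clause contains at least one true literal. A minimal Python sketch, assuming the same illustrative encoding as above (a list of clauses, each a set of string literals with "~" marking negation):

```python
def eval_cnf(cnf: list[set[str]], assignment: dict[str, bool]) -> bool:
    """A CNF formula is true iff every clause (a disjunction of
    literals) contains at least one literal true under the assignment."""
    def eval_literal(l: str) -> bool:
        return not assignment[l[1:]] if l.startswith("~") else assignment[l]

    return all(any(eval_literal(l) for l in clause) for clause in cnf)

# (x or ~y) and (y or z) is true under x=True, y=True, z=False.
assert eval_cnf([{"x", "~y"}, {"y", "z"}], {"x": True, "y": True, "z": False})
```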

In classical deductive logic, a consistent theory is one that does not lead to a logical contradiction. The lack of contradiction can be defined in either semantic or syntactic terms. The semantic definition states that a theory is consistent if it has a model, i.e., there exists an interpretation under which all formulas in the theory are true. This is the sense used in traditional Aristotelian logic, although in contemporary mathematical logic the term satisfiable is used instead. The syntactic definition states that a theory $T$ is consistent if there is no formula $\varphi$ such that both $\varphi$ and its negation $\lnot\varphi$ are elements of the set of consequences of $T$. Let $A$ be a set of closed sentences and $\langle A \rangle$ the set of closed sentences provable from $A$ under some formal deductive system. The set of axioms $A$ is consistent when there is no formula $\varphi$ such that $\varphi \in \langle A \rangle$ and $\lnot\varphi \in \langle A \rangle$.

In mathematical logic, a universal quantification is a type of quantifier, a logical constant which is interpreted as "given any", "for all", or "for any". It expresses that a predicate can be satisfied by every member of a domain of discourse. In other words, it is the predication of a property or relation to every member of the domain. It asserts that a predicate within the scope of a universal quantifier is true of every value of a predicate variable.

In predicate logic, an existential quantification is a type of quantifier, a logical constant which is interpreted as "there exists", "there is at least one", or "for some". It is usually denoted by the logical operator symbol ∃, which, when used together with a predicate variable, is called an existential quantifier ("∃x" or "∃(x)" or "(∃x)"). Existential quantification is distinct from universal quantification ("for all"), which asserts that the property or relation holds for all members of the domain. Some sources use the term existentialization to refer to existential quantification.

In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems in a first-order language rather than conditional tautologies.

In mathematical logic, propositional logic and predicate logic, a well-formed formula, abbreviated WFF or wff, often simply formula, is a finite sequence of symbols from a given alphabet that is part of a formal language. A formal language can be identified with the set of formulas in the language.

In mathematical logic, a formula is in negation normal form (NNF) if the negation operator ($\lnot$, not) is only applied to variables and the only other allowed Boolean operators are conjunction ($\wedge$, and) and disjunction ($\vee$, or).
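A formula can be put into NNF by pushing negations inward with De Morgan's laws and eliminating double negations, after which every remaining negation forms a literal. A sketch, assuming an illustrative encoding of formulas as variable strings or nested tuples:

```python
def to_nnf(f):
    """Convert to negation normal form. Formulas are variable strings
    or tuples ('not', g), ('and', g, h), ('or', g, h)."""
    if isinstance(f, str):
        return f
    op = f[0]
    if op in ("and", "or"):
        return (op, to_nnf(f[1]), to_nnf(f[2]))
    g = f[1]                       # op == 'not'
    if isinstance(g, str):
        return f                   # negation of a variable: a literal, done
    if g[0] == "not":
        return to_nnf(g[1])        # double negation elimination: ~~h => h
    dual = "or" if g[0] == "and" else "and"
    return (dual, to_nnf(("not", g[1])), to_nnf(("not", g[2])))  # De Morgan

# ~(x and ~y)  =>  ~x or y
assert to_nnf(("not", ("and", "x", ("not", "y")))) == ("or", ("not", "x"), "y")
```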

Begriffsschrift is a book on logic by Gottlob Frege, published in 1879, and the formal system set out in that book.

Golem is an inductive logic programming algorithm developed by Stephen Muggleton and Cao Feng in 1990. It uses the technique of relative least general generalisation proposed by Gordon Plotkin, leading to a bottom-up search through the subsumption lattice. In 1992, shortly after its introduction, Golem was considered the only inductive logic programming system capable of scaling to tens of thousands of examples.

In propositional logic, a propositional formula is a type of syntactic formula which is well formed and has a truth value. If the values of all variables in a propositional formula are given, it determines a unique truth value. A propositional formula may also be called a propositional expression, a sentence, or a sentential formula.

In mathematical logic, a ground term of a formal system is a term that does not contain any variables. Similarly, a ground formula is a formula that does not contain any variables.

In mathematical logic, Craig's interpolation theorem is a result about the relationship between different logical theories. Roughly stated, the theorem says that if a formula φ implies a formula ψ, and the two have at least one atomic variable symbol in common, then there is a formula ρ, called an interpolant, such that every non-logical symbol in ρ occurs both in φ and ψ, φ implies ρ, and ρ implies ψ. The theorem was first proved for first-order logic by William Craig in 1957. Variants of the theorem hold for other logics, such as propositional logic. A stronger form of Craig's interpolation theorem for first-order logic was proved by Roger Lyndon in 1959; the overall result is sometimes called the Craig–Lyndon theorem.

In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem.
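The resolution rule itself acts on a complementary pair of literals: given one clause containing a literal and another containing its complement, it derives the union of the remaining literals. A minimal propositional sketch, with clauses assumed to be frozensets of string literals:

```python
def resolve(c1: frozenset[str], c2: frozenset[str], atom: str) -> frozenset[str]:
    """Resolve two clauses on a complementary pair: the resolvent is the
    union of both clauses minus the clashing literals atom and ~atom."""
    assert atom in c1 and "~" + atom in c2, "clauses must clash on the atom"
    return (c1 - {atom}) | (c2 - {"~" + atom})

# From (p or q) and (~p or r), derive the resolvent (q or r).
assert resolve(frozenset({"p", "q"}), frozenset({"~p", "r"}), "p") == {"q", "r"}
```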

In mathematical logic, an atomic formula is a formula with no deeper propositional structure, that is, a formula that contains no logical connectives or equivalently a formula that has no strict subformulas. Atoms are thus the simplest well-formed formulas of the logic. Compound formulas are formed by combining the atomic formulas using the logical connectives.

In mathematical logic, a tautology is a formula or assertion that is true in every possible interpretation. An example is "x=y or x≠y". Similarly, "either the ball is green, or the ball is not green" is always true, regardless of the colour of the ball.

In logic, a clause is a propositional formula formed from a finite collection of literals and logical connectives. A clause is true either whenever at least one of the literals that form it is true, or when all of the literals that form it are true. That is, it is a finite disjunction or conjunction of literals, depending on the context. Clauses are usually written as follows, where the symbols $l_i$ are literals: $l_1 \lor \cdots \lor l_n$.

An interpretation is an assignment of meaning to the symbols of a formal language. Many formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics.

References

Notes

  1. Rautenberg (2010, p. 57): "The formulas procured by (F1) and (F2) are said to be prime or atomic formulas, or simply called prime. As in propositional logic, prime formulas and their negations are called literals."
  2. Ben-Ari (2001, p. 30): "A literal is an atom or a negation of an atom. An atom is a positive literal and the negation of an atom is a negative literal."
  3. Ben-Ari (2001, p. 69): "If $l$ is a literal, $\bar{l}$ is its complement. This means that if $l = x$, then $\bar{l} = \lnot x$, and if $l = \lnot x$ then $\bar{l} = x$."
  4. Godse & Godse 2008.