In mathematical logic, a formula is satisfiable if it is true under some assignment of values to its variables. For example, the formula x + y = 4 is satisfiable because it is true when x = 2 and y = 2, while the formula x + 1 = x is not satisfiable over the integers. The dual concept to satisfiability is validity; a formula is valid if every assignment of values to its variables makes the formula true. For example, x + y = y + x is valid over the integers, but x + y = 4 is not.
Formally, satisfiability is studied with respect to a fixed logic defining the syntax of allowed symbols, such as first-order logic, second-order logic or propositional logic. Rather than being syntactic, however, satisfiability is a semantic property, because it relates to the meaning of the symbols, for example, the meaning of + in a formula such as x + 1 > x. Formally, we define an interpretation (or model) to be an assignment of values to the variables and an assignment of meaning to all other non-logical symbols, and a formula is said to be satisfiable if there is some interpretation which makes it true. [1] While this allows non-standard interpretations of symbols such as +, one can restrict their meaning by providing additional axioms. The satisfiability modulo theories problem considers satisfiability of a formula with respect to a formal theory, which is a (finite or infinite) set of axioms.
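As a minimal illustration of satisfiability modulo a theory, the following sketch assumes the z3-solver Python package (not mentioned in the text above) and checks two arithmetic formulas for satisfiability modulo the theory of integer arithmetic.

```python
# A minimal sketch, assuming the z3-solver package is installed (pip install z3-solver).
# It checks the satisfiability of two formulas modulo the theory of integer arithmetic.
from z3 import Int, Solver, sat

x = Int('x')

s = Solver()
s.add(x + 1 > x)                 # true under every integer assignment
print(s.check() == sat)          # True: the formula is satisfiable

s = Solver()
s.add(x + 1 == x)                # false under every integer assignment
print(s.check() == sat)          # False: the formula is unsatisfiable
```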
Satisfiability and validity are defined for a single formula, but can be generalized to an arbitrary theory or set of formulas: a theory is satisfiable if at least one interpretation makes every formula in the theory true, and valid if every formula is true in every interpretation. For example, theories of arithmetic such as Peano arithmetic are satisfiable because they are true in the natural numbers. This concept is closely related to the consistency of a theory, and in fact is equivalent to consistency for first-order logic, a result known as Gödel's completeness theorem. The negation of satisfiability is unsatisfiability, and the negation of validity is invalidity. These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition.
The problem of determining whether a formula in propositional logic is satisfiable is decidable, and is known as the Boolean satisfiability problem, or SAT. In general, the problem of determining whether a sentence of first-order logic is satisfiable is not decidable. In universal algebra, equational theory, and automated theorem proving, the methods of term rewriting, congruence closure and unification are used to attempt to decide satisfiability. Whether a particular theory is decidable or not depends on whether the theory is variable-free and on other conditions. [2]
For classical logics with negation, it is generally possible to re-express the question of the validity of a formula as one involving satisfiability, because of the relationships between the concepts expressed in the above square of opposition. In particular, φ is valid if and only if ¬φ is unsatisfiable, which is to say it is false that ¬φ is satisfiable. Put another way, φ is satisfiable if and only if ¬φ is invalid.
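The reduction from validity to unsatisfiability can be made concrete for classical propositional logic. The sketch below is illustrative only: it assumes a representation of formulas as Python functions over truth assignments and simply enumerates all assignments.

```python
# Illustrative sketch for classical propositional logic: a formula is represented
# as a Python function over a truth assignment, and validity is reduced to the
# unsatisfiability of the negation.
from itertools import product

def is_satisfiable(formula, variables):
    """True if some assignment of truth values to `variables` makes `formula` true."""
    return any(formula(dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

def is_valid(formula, variables):
    """phi is valid iff (not phi) is unsatisfiable."""
    negation = lambda assignment: not formula(assignment)
    return not is_satisfiable(negation, variables)

# Example: "p OR NOT p" is valid; "p AND NOT p" is unsatisfiable.
print(is_valid(lambda a: a["p"] or not a["p"], ["p"]))         # True
print(is_satisfiable(lambda a: a["p"] and not a["p"], ["p"]))  # False
```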
For logics without negation, such as the positive propositional calculus, the questions of validity and satisfiability may be unrelated. In the case of the positive propositional calculus, the satisfiability problem is trivial, as every formula is satisfiable (the assignment making every variable true satisfies any negation-free formula), while the validity problem is co-NP-complete.
In the case of classical propositional logic, satisfiability is decidable for propositional formulae. In particular, satisfiability is an NP-complete problem, and is one of the most intensively studied problems in computational complexity theory.
For first-order logic (FOL), satisfiability is undecidable. More specifically, it is a co-RE-complete problem and therefore not semidecidable. [3] This fact is closely related to the undecidability of the validity problem for FOL. The question of the status of the validity problem was first posed by David Hilbert, as the so-called Entscheidungsproblem. The universal validity of a formula is a semi-decidable problem by Gödel's completeness theorem. If satisfiability were also a semi-decidable problem, then the problem of the existence of counter-models would be too (a formula has counter-models iff its negation is satisfiable). So the problem of logical validity would be decidable, which contradicts the Church–Turing theorem, a result stating the negative answer to the Entscheidungsproblem.
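The argument in the last three sentences can be pictured as a dovetailing procedure. The sketch below is purely schematic: the two enumerators are hypothetical placeholders for a proof search and a counter-model search, each assumed to be a semi-decision procedure.

```python
# Schematic sketch of the dovetailing argument: if both "phi is provable" (validity)
# and "NOT phi has a model" (existence of a counter-model) were semi-decidable,
# interleaving the two searches would decide validity, contradicting the
# Church-Turing theorem. The two callables are hypothetical placeholders.
from itertools import count

def decide_validity(phi, enumerate_proofs, enumerate_countermodels):
    """Interleave two semi-decision procedures; exactly one eventually succeeds."""
    for n in count():
        if enumerate_proofs(phi, steps=n):          # proof found => phi is valid
            return True
        if enumerate_countermodels(phi, steps=n):   # counter-model found => phi is not valid
            return False
```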
In model theory, an atomic formula is satisfiable if there is a collection of elements of a structure that render the formula true. [4] If A is a structure, φ is a formula, and a is a collection of elements, taken from the structure, that satisfy φ, then it is commonly written that
A ⊨ φ[a].
If φ has no free variables, that is, if φ is an atomic sentence, and it is satisfied by A, then one writes
A ⊨ φ.
In this case, one may also say that A is a model for φ, or that φ is true in A. If T is a collection of atomic sentences (a theory) satisfied by A, one writes
A ⊨ T.
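As a small illustration of the satisfaction relation, the following sketch evaluates an atomic formula R(x, y) in a hand-built finite structure; the structure and the formula are illustrative assumptions, not taken from the text above.

```python
# Minimal sketch of model-theoretic satisfaction for an atomic formula R(x, y)
# in a finite structure A (a domain plus an interpretation of the relation symbol R).
domain = {0, 1, 2}
R = {(0, 1), (1, 2)}                     # interpretation of the binary relation R

def satisfies_R(assignment):
    """A |= R(x, y)[assignment] iff the assigned pair lies in the interpretation of R."""
    return (assignment["x"], assignment["y"]) in R

print(satisfies_R({"x": 0, "y": 1}))     # True:  A |= R(x, y) when x -> 0, y -> 1
print(satisfies_R({"x": 2, "y": 0}))     # False
```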
A problem related to satisfiability is that of finite satisfiability, which is the question of determining whether a formula admits a finite model that makes it true. For a logic that has the finite model property, the problems of satisfiability and finite satisfiability coincide, as a formula of that logic has a model if and only if it has a finite model. This question is important in the mathematical field of finite model theory.
Finite satisfiability and satisfiability need not coincide in general. For instance, consider the first-order logic formula obtained as the conjunction of the following sentences, where a and b are constants:

- R(a, b)
- ∀x ∀y (R(x, y) → ∃z R(y, z))
- ∀x ¬R(x, a)
- ∀x ∀y ∀z ((R(x, z) ∧ R(y, z)) → x = y)

The resulting formula has an infinite model (for example, the natural numbers with a interpreted as 0, b as 1, and R as the successor relation), but it can be shown that it has no finite model: starting at the fact R(a, b) and following the chain of R-atoms that must exist by the second axiom, the finiteness of a model would require the existence of a loop, which would violate the third and fourth axioms, whether it loops back on a or on a different element.
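The absence of small finite models can be checked mechanically. The following brute-force sketch (an illustration, not a general finite-model finder) enumerates all interpretations of R, a and b over domains of up to three elements and confirms that none satisfies the four sentences.

```python
# Brute-force search for a finite model of the four sentences above: enumerate every
# interpretation of R, a and b over small domains and check each axiom directly.
from itertools import product

def is_model(domain, R, a, b):
    # 1. R(a, b)
    if (a, b) not in R:
        return False
    # 2. for all x, y: R(x, y) -> exists z: R(y, z)
    if any((x, y) in R and not any((y, z) in R for z in domain)
           for x in domain for y in domain):
        return False
    # 3. for all x: not R(x, a)
    if any((x, a) in R for x in domain):
        return False
    # 4. for all x, y, z: R(x, z) and R(y, z) -> x = y
    if any((x, z) in R and (y, z) in R and x != y
           for x in domain for y in domain for z in domain):
        return False
    return True

for n in range(1, 4):
    domain = range(n)
    pairs = [(x, y) for x in domain for y in domain]
    found = any(is_model(domain, {p for p, keep in zip(pairs, bits) if keep}, a, b)
                for bits in product([False, True], repeat=len(pairs))
                for a in domain for b in domain)
    print(f"model of size {n}: {found}")   # prints False for every n
```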
The computational complexity of deciding satisfiability for an input formula in a given logic may differ from that of deciding finite satisfiability; in fact, for some logics, only one of them is decidable.
For classical first-order logic, finite satisfiability is recursively enumerable (in class RE) and undecidable by Trakhtenbrot's theorem applied to the negation of the formula.
Numerical constraints often appear in the field of mathematical optimization, where one usually wants to maximize (or minimize) an objective function subject to some constraints. However, leaving aside the objective function, the basic issue of simply deciding whether the constraints are satisfiable can be challenging or undecidable in some settings. The following table summarizes the main cases.
| Constraints | Over reals | Over integers |
|---|---|---|
| Linear | PTIME (see linear programming) | NP-complete (see integer programming) |
| Polynomial | Decidable through e.g. cylindrical algebraic decomposition | Undecidable (Hilbert's tenth problem) |
Table source: Bockmayr and Weispfenning. [5] : 754
For linear constraints, a fuller picture is provided by the following table.
| Constraints over: | Rationals | Integers | Natural numbers |
|---|---|---|---|
| Linear equations | PTIME | PTIME | NP-complete |
| Linear inequalities | PTIME | NP-complete | NP-complete |
Table source: Bockmayr and Weispfenning. [5] : 755
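As a hedged illustration of the distinction in the tables, the sketch below checks the same system of linear inequalities for satisfiability over the reals (via linear programming, assuming SciPy is installed) and over the integers (by naive enumeration over a small box); the particular constraints are made up for the example.

```python
# Feasibility check for linear constraints, as a sketch (assumes SciPy is installed).
# Over the reals this is a linear program; over the integers the analogous question
# is NP-complete and is brute-forced here over a small box just for illustration.
from itertools import product
from scipy.optimize import linprog

# Constraints: x + y <= 4, x - y <= 1, x >= 2.5 (encoded as -x <= -2.5)
A_ub = [[1, 1], [1, -1], [-1, 0]]
b_ub = [4, 1, -2.5]

# Real-valued feasibility: minimize the zero objective subject to the constraints.
result = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (None, None)])
print("satisfiable over the reals:", result.success)          # True (e.g. x = 2.5, y = 1.5)

# Integer feasibility over a small search box (naive exponential search).
def feasible_int(lo=-10, hi=10):
    return any(all(sum(a * v for a, v in zip(row, (x, y))) <= b
                   for row, b in zip(A_ub, b_ub))
               for x, y in product(range(lo, hi + 1), repeat=2))

print("satisfiable over the integers (within the box):", feasible_int())  # False
```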
An axiom, postulate, or assumption is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Ancient Greek word ἀξίωμα (axíōma), meaning 'that which is thought worthy or fit' or 'that which commends itself as evident'.
In logic and computer science, the Boolean satisfiability problem is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make "a AND NOT b" = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
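The two examples can be verified by exhaustively testing all assignments, as in this small illustrative sketch.

```python
# Tiny illustration: exhaustively test all assignments to check the two examples.
from itertools import product

def satisfiable(formula):
    return any(formula(a, b) for a, b in product([True, False], repeat=2))

print(satisfiable(lambda a, b: a and not b))   # True  (a = TRUE, b = FALSE works)
print(satisfiable(lambda a, b: a and not a))   # False (unsatisfiable)
```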
First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.
The proof of Gödel's completeness theorem given by Kurt Gödel in his doctoral dissertation of 1929 is not easy to read today; it uses concepts and formalisms that are no longer used and terminology that is often obscure. The version given below attempts to represent all the steps in the proof and all the important ideas faithfully, while restating the proof in the modern language of mathematical logic. This outline should not be considered a rigorous proof of the theorem.
Propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives. Propositions that contain no logical connectives are called atomic propositions.
In mathematical logic, model theory is the study of the relationship between formal theories and their models. The aspects investigated include the number and size of models of a theory, the relationship of different models to each other, and their interaction with the formal language itself. In particular, model theorists also investigate the sets that can be defined in a model of a theory, and the relationship of such definable sets to each other. As a separate discipline, model theory goes back to Alfred Tarski, who first used the term "Theory of Models" in publication in 1954. Since the 1970s, the subject has been shaped decisively by Saharon Shelah's stability theory.
In classical deductive logic, a consistent theory is one that does not lead to a logical contradiction. The lack of contradiction can be defined in either semantic or syntactic terms. The semantic definition states that a theory is consistent if it has a model, i.e., there exists an interpretation under which all formulas in the theory are true. This is the sense used in traditional Aristotelian logic, although in contemporary mathematical logic the term satisfiable is used instead. The syntactic definition states that a theory T is consistent if there is no formula φ such that both φ and its negation ¬φ are elements of the set of consequences of T. Let A be a set of closed sentences and ⟨A⟩ the set of closed sentences provable from A under some formal deductive system. The set of axioms A is consistent when there is no formula φ with both φ and ¬φ in ⟨A⟩.
In mathematical logic, the compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This theorem is an important tool in model theory, as it provides a useful method for constructing models of any set of sentences that is finitely consistent.
Intuitionistic logic, sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic.
In logic and mathematics, second-order logic is an extension of first-order logic, which itself is an extension of propositional logic. Second-order logic is in turn extended by higher-order logic and type theory.
In logic, a true/false decision problem is decidable if there exists an effective method for deriving the correct answer. Zeroth-order logic is decidable, whereas first-order and higher-order logic are not. Logical systems are decidable if membership in their set of logically valid formulas can be effectively determined. A theory in a fixed logical system is decidable if there is an effective method for determining whether arbitrary formulas are included in the theory. Many important problems are undecidable, that is, it has been proven that no effective method for determining membership can exist for them.
In mathematics, Robinson arithmetic is a finitely axiomatized fragment of first-order Peano arithmetic (PA), first set out by R. M. Robinson in 1950. It is usually denoted Q. Q is almost PA without the axiom schema of mathematical induction. Q is weaker than PA but it has the same language, and both theories are incomplete. Q is important and interesting because it is a finitely axiomatized fragment of PA that is recursively incompletable and essentially undecidable.
In mathematical logic, Heyting arithmetic is an axiomatization of arithmetic in accordance with the philosophy of intuitionism. It is named after Arend Heyting, who first proposed it.
In mathematical logic, a theory is a set of sentences in a formal language. In most scenarios, a deductive system is first understood from context, after which an element of a deductively closed theory is then called a theorem of the theory. In many deductive systems there is usually a subset that is called "the set of axioms" of the theory , in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. A first-order theory is a set of first-order sentences (theorems) recursively obtained by the inference rules of the system applied to the set of axioms.
In logic, finite model theory, and computability theory, Trakhtenbrot's theorem states that the problem of validity in first-order logic on the class of all finite models is undecidable. In fact, the class of valid sentences over finite models is not recursively enumerable.
Constructive set theory is an approach to mathematical constructivism following the program of axiomatic set theory. The same first-order language with "=" and "∈" of classical set theory is usually used, so this is not to be confused with a constructive types approach. On the other hand, some constructive theories are indeed motivated by their interpretability in type theories.
In theoretical computer science, the modal μ-calculus is an extension of propositional modal logic by adding the least fixed point operator μ and the greatest fixed point operator ν, thus a fixed-point logic.
An interpretation is an assignment of meaning to the symbols of a formal language. Many formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics.
In mathematical logic the theory of pure equality is a first-order theory. It has a signature consisting of only the equality relation symbol, and includes no non-logical axioms at all.