In computational complexity theory, the language TQBF is a formal language consisting of the true quantified Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic (also known as second-order propositional logic) in which every variable is quantified (or bound), using either an existential or a universal quantifier, at the beginning of the sentence. Such a formula is equivalent to either true or false (since there are no free variables). If such a formula evaluates to true, then it is in the language TQBF. TQBF is also known as QSAT (Quantified SAT).
In computational complexity theory, the quantified Boolean formula problem (QBF) is a generalization of the Boolean satisfiability problem in which both existential quantifiers and universal quantifiers can be applied to each variable. Put another way, it asks whether a quantified sentential form over a set of Boolean variables is true or false. For example, the following is an instance of QBF:

∀x ∃y ∃z ((x ∨ z) ∧ y)
QBF is the canonical complete problem for PSPACE, the class of problems solvable by a deterministic or nondeterministic Turing machine in polynomial space and unlimited time. [1] Given the formula in the form of an abstract syntax tree, the problem can be solved easily by a set of mutually recursive procedures which evaluate the formula. Such an algorithm uses space proportional to the height of the tree, which is linear in the worst case, but uses time exponential in the number of quantifiers.
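For concreteness, the following Python sketch shows one possible abstract-syntax-tree representation together with a recursive evaluator over it; the node classes and function names are illustrative choices, not a standard interface.

```python
from dataclasses import dataclass
from typing import Dict, Union

# One possible AST for quantified Boolean formulas (an illustrative choice,
# not a standard library interface).
@dataclass
class Var:
    name: str

@dataclass
class Not:
    arg: "Formula"

@dataclass
class And:
    left: "Formula"
    right: "Formula"

@dataclass
class Or:
    left: "Formula"
    right: "Formula"

@dataclass
class Exists:
    var: str
    body: "Formula"

@dataclass
class Forall:
    var: str
    body: "Formula"

Formula = Union[Var, Not, And, Or, Exists, Forall]

def evaluate(f: Formula, env: Dict[str, bool]) -> bool:
    """Evaluate a formula under a partial assignment. Quantifier nodes recurse
    on both truth values of their variable, so space is proportional to the
    height of the tree while time is exponential in the number of quantifiers."""
    if isinstance(f, Var):
        return env[f.name]
    if isinstance(f, Not):
        return not evaluate(f.arg, env)
    if isinstance(f, And):
        return evaluate(f.left, env) and evaluate(f.right, env)
    if isinstance(f, Or):
        return evaluate(f.left, env) or evaluate(f.right, env)
    if isinstance(f, Exists):
        return any(evaluate(f.body, {**env, f.var: v}) for v in (False, True))
    if isinstance(f, Forall):
        return all(evaluate(f.body, {**env, f.var: v}) for v in (False, True))
    raise TypeError(f"unknown node: {f!r}")

# Example: the formula forall x exists y (x or y) is true.
print(evaluate(Forall("x", Exists("y", Or(Var("x"), Var("y")))), {}))  # True
```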
Provided that MA ⊊ PSPACE, which is widely believed, QBF cannot be solved, nor can a given solution even be verified, in either deterministic or probabilistic polynomial time (in fact, unlike the satisfiability problem, there's no known way to specify a solution succinctly). It can be solved using an alternating Turing machine in linear time, since AP = PSPACE, where AP is the class of problems alternating machines can solve in polynomial time. [2]
When the seminal result IP = PSPACE was shown (see interactive proof system), it was done by exhibiting an interactive proof system that could solve QBF by solving a particular arithmetization of the problem. [3]
QBF formulas have a number of useful canonical forms. For example, it can be shown that there is a polynomial-time many-one reduction that will move all quantifiers to the front of the formula and make them alternate between universal and existential quantifiers. There is another reduction that proved useful in the IP = PSPACE proof where no more than one universal quantifier is placed between each variable's use and the quantifier binding that variable. This was critical in limiting the number of products in certain subexpressions of the arithmetization.
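As a small illustration (with illustrative subformula names φ and ψ), one standard step of such a reduction pulls a quantifier outward when its bound variable does not occur free in the other subformula:

```latex
% One prenexing step: a quantifier can be pulled outward when its bound
% variable does not occur free in the other subformula. Here \phi and \psi
% stand for arbitrary Boolean subformulas (illustrative names).
\[
  \bigl(\forall x\,\phi(x)\bigr) \land \bigl(\exists y\,\psi(y)\bigr)
  \;\equiv\;
  \forall x\,\exists y\,\bigl(\phi(x) \land \psi(y)\bigr)
\]
```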
A fully quantified Boolean formula can be assumed to have a very specific form, called prenex normal form. It has two basic parts: a portion containing only quantifiers and a portion containing an unquantified Boolean formula, usually denoted as φ. If there are n Boolean variables, the entire formula can be written as

Q1x1 Q2x2 ⋯ Qnxn φ(x1, x2, ..., xn),

where each Qi is either ∃ or ∀ and every variable falls within the scope of some quantifier. By introducing dummy variables, any formula in prenex normal form can be converted into a sentence in which existential and universal quantifiers alternate. Using the dummy variable y1,

∃x1 ∃x2 φ(x1, x2)   ↦   ∃x1 ∀y1 ∃x2 φ(x1, x2).
The second sentence has the same truth value but follows the restricted syntax. Assuming fully quantified Boolean formulas to be in prenex normal form is a frequent feature of proofs.
There is a simple recursive algorithm for determining whether a QBF is in TQBF (i.e. is true). Given some QBF

Q1x1 Q2x2 ⋯ Qnxn φ(x1, ..., xn),

if the formula contains no quantifiers, we can just return the formula. Otherwise, we take off the first quantifier and check both possible values for the first variable:

A = Q2x2 ⋯ Qnxn φ(0, x2, ..., xn),
B = Q2x2 ⋯ Qnxn φ(1, x2, ..., xn).

If Q1 = ∃, then return A ∨ B. If Q1 = ∀, then return A ∧ B. [4]
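The following Python sketch follows this recursive procedure for formulas in prenex form; the representation of the prefix and the matrix is an illustrative assumption, not a standard interface.

```python
from typing import Callable, Dict, List, Tuple

# A prenex QBF is represented here as a prefix (a list of (quantifier, variable)
# pairs, outermost first) plus a quantifier-free matrix, given as a function
# from an assignment to a bool. This representation is an illustrative choice.
Prefix = List[Tuple[str, str]]          # e.g. [("exists", "x1"), ("forall", "x2")]
Matrix = Callable[[Dict[str, bool]], bool]

def decide_tqbf(prefix: Prefix, matrix: Matrix, assignment: Dict[str, bool] = None) -> bool:
    """Return True iff the fully quantified formula is true."""
    if assignment is None:
        assignment = {}
    if not prefix:
        # No quantifiers left: just evaluate the quantifier-free matrix.
        return matrix(assignment)
    quantifier, var = prefix[0]
    rest = prefix[1:]
    # The two recursive calls correspond to A and B in the text above.
    a = decide_tqbf(rest, matrix, {**assignment, var: False})
    b = decide_tqbf(rest, matrix, {**assignment, var: True})
    return (a or b) if quantifier == "exists" else (a and b)

# Example: forall x exists y ((x or y) and (not x or y)) is true (take y = True).
prefix = [("forall", "x"), ("exists", "y")]
matrix = lambda env: (env["x"] or env["y"]) and (not env["x"] or env["y"])
print(decide_tqbf(prefix, matrix))  # True
```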
How fast does this algorithm run? For every quantifier in the initial QBF, the algorithm makes two recursive calls on a subproblem that is only linearly smaller. This gives the algorithm an exponential runtime of O(2^n).
How much space does this algorithm use? Each invocation of the algorithm needs to store the intermediate results of computing A and B. Every recursive call takes off one quantifier, so the total recursive depth is linear in the number of quantifiers. Formulas that lack quantifiers can be evaluated in space logarithmic in the number of variables. The initial QBF was fully quantified, so there are at least as many quantifiers as variables. Thus, this algorithm uses O(n + log n) = O(n) space. This places the TQBF language in the PSPACE complexity class.
Despite the PSPACE-completeness of QBF, many solvers have been developed for these instances. (This is analogous to the situation with SAT, the purely existentially quantified special case: even though it is NP-complete, many SAT instances can still be solved heuristically.) [5] [6] The case with only two quantifier blocks, known as 2QBF, has received special attention. [7]
The QBF solver competition QBFEVAL has been running more or less annually since 2004; [5] [6] solvers are required to read instances in the QDIMACS format and either the QCIR or QAIGER formats. [8] High-performing QBF solvers generally use QDPLL (a generalization of DPLL) or CEGAR. [5] [6] [7] Research into QBF solving began with the development of backtracking DPLL for QBF in 1998, followed by the introduction of clause learning and variable elimination in 2002; [9] thus, compared to SAT solving, which has been under development since the 1960s, QBF solving is a relatively young field of research as of 2017. [9]
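As a rough sketch of the QDIMACS input format, the following Python snippet writes the true formula ∀x1 ∃x2 ((x1 ∨ x2) ∧ (¬x1 ∨ x2)) to a file; the file name and comment text are arbitrary choices for the example.

```python
# QDIMACS extends the DIMACS CNF format with quantifier lines: after the
# "p cnf <#vars> <#clauses>" problem line, lines starting with "a" (universal)
# or "e" (existential) list the variables of each quantifier block, outermost
# block first, each line terminated by 0. The clauses follow as in plain DIMACS.
# The encoded formula here is: forall x1 exists x2 ((x1 or x2) and (not x1 or x2)).
qdimacs = """c example instance
p cnf 2 2
a 1 0
e 2 0
1 2 0
-1 2 0
"""

with open("example.qdimacs", "w") as f:
    f.write(qdimacs)
```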
Some prominent QBF solvers include:
QBF solvers can be applied to planning (in artificial intelligence), including safe planning; the latter is critical in applications of robotics. [14] QBF solvers can also be applied to bounded model checking as they provide a shorter encoding than would be needed for a SAT-based solver. [14]
The evaluation of a QBF can be seen as a two-player game between a player who controls existentially quantified variables and a player who controls universally quantified variables. This makes QBFs suitable for encoding reactive synthesis problems. [14] Similarly, QBF solvers can be used to model adversarial games in game theory. For example, QBF solvers can be used to find winning strategies for games of geography, which can then be automatically played interactively. [15]
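As a small illustration of this game reading (a sketch with an ad hoc representation, not any solver's interface), the following Python function reports a winning first move for the existential player when the formula starts with an existential quantifier:

```python
from typing import Callable, Dict, List, Optional, Tuple

Prefix = List[Tuple[str, str]]                 # e.g. [("exists", "x"), ("forall", "y")]
Matrix = Callable[[Dict[str, bool]], bool]

def wins(prefix: Prefix, matrix: Matrix, env: Dict[str, bool]) -> bool:
    """True iff the existential player has a winning strategy from this position."""
    if not prefix:
        return matrix(env)
    quantifier, var = prefix[0]
    branches = (wins(prefix[1:], matrix, {**env, var: v}) for v in (False, True))
    # The existential player needs some choice that works; the universal player
    # may pick either value, so both must work.
    return any(branches) if quantifier == "exists" else all(branches)

def first_move(prefix: Prefix, matrix: Matrix) -> Optional[bool]:
    """If the formula starts with an existential quantifier and is true, return
    a value for that variable that keeps the existential player winning."""
    if not prefix or prefix[0][0] != "exists":
        return None
    var = prefix[0][1]
    for value in (False, True):
        if wins(prefix[1:], matrix, {var: value}):
            return value
    return None

# Example: exists x forall y (x or y) -- the existential player should pick x = True.
prefix = [("exists", "x"), ("forall", "y")]
matrix = lambda env: env["x"] or env["y"]
print(first_move(prefix, matrix))  # True
```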
QBF solvers can be used for formal equivalence checking, and can also be used to synthesize Boolean functions. [14]
Other types of problems that can be encoded as QBFs include:
In QBFEVAL 2020, a "DQBF Track" was introduced where instances were allowed to have Henkin quantifiers (expressed in DQDIMACS format). [8]
The TQBF language serves in complexity theory as the canonical PSPACE-complete problem. Being PSPACE-complete means that a language is in PSPACE and that it is also PSPACE-hard. The algorithm above shows that TQBF is in PSPACE. Showing that TQBF is PSPACE-hard requires showing that any language in the complexity class PSPACE can be reduced to TQBF in polynomial time, i.e.,

∀L ∈ PSPACE, L ≤p TQBF.
This means that, for a PSPACE language L, whether an input x is in L can be decided by checking whether f(x) is in TQBF, for some function f that is required to run in polynomial time (relative to the length of the input). Symbolically,

x ∈ L  ⇔  f(x) ∈ TQBF.
Proving that TQBF is PSPACE-hard requires specification of f.
So, suppose that L is a PSPACE language. This means that L can be decided by a polynomial-space deterministic Turing machine (DTM). This is very important for the reduction of L to TQBF, because the configurations of any such Turing machine can be represented as Boolean formulas, with Boolean variables representing the state of the machine as well as the contents of each cell on the Turing machine tape, and with the position of the Turing machine head encoded in the formula by the formula's ordering. In particular, our reduction will use the variables c1 and c2, which represent two possible configurations of the DTM for L, and a natural number t, in constructing a QBF φ_{c1,c2,t} which is true if and only if the DTM for L can go from the configuration encoded in c1 to the configuration encoded in c2 in no more than t steps. The function f, then, will construct from the DTM for L a QBF φ_{c_start,c_accept,T}, where c_start is the DTM's starting configuration, c_accept is the DTM's accepting configuration, and T is the maximum number of steps the DTM could need to move from one configuration to the other. We know that T = O(exp(n^k)) for some k, where n is the length of the input, because this bounds the total number of possible configurations of the relevant DTM. Of course, it cannot take the DTM more steps than there are possible configurations to reach c_accept unless it enters a loop, in which case it will never reach c_accept anyway.
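In symbols, and restating the construction just described (with T bounded as in the text):

```latex
% The reduction maps an input w of length n to a QBF asserting that the start
% configuration reaches the accepting configuration within T steps:
\[
  f(w) = \varphi_{c_{\mathrm{start}},\,c_{\mathrm{accept}},\,T},
  \qquad
  w \in L \iff f(w) \in \mathrm{TQBF},
  \qquad
  T = O\!\bigl(\exp(n^{k})\bigr) \text{ for some } k.
\]
```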
At this stage of the proof, we have already reduced the question of whether an input formula w (encoded, of course, in c_start) is in L to the question of whether the QBF f(w), i.e., φ_{c_start,c_accept,T}, is in TQBF. The remainder of this proof shows that f can be computed in polynomial time.
For t = 1, computation of φ_{c1,c2,t} is straightforward: either one of the configurations changes to the other in one step or it does not. Since the Turing machine that our formula represents is deterministic, this presents no problem.
For t > 1, computation of φ_{c1,c2,t} involves a recursive evaluation, looking for a so-called "middle point" m1. In this case, we rewrite the formula as follows:

φ_{c1,c2,t} = ∃m1 (φ_{c1,m1,⌈t/2⌉} ∧ φ_{m1,c2,⌈t/2⌉}).
This converts the question of whether c1 can reach c2 in t steps to the question of whether c1 reaches a middle point m1 in ⌈t/2⌉ steps, which itself reaches c2 in ⌈t/2⌉ steps. The answer to the latter question of course gives the answer to the former.
Now, t is only bounded by T, which is exponential (and so not polynomial) in the length of the input. Additionally, each recursive layer virtually doubles the length of the formula. (The variable m1 is only one midpoint; for greater t, there are more stops along the way, so to speak.) So the time required to recursively evaluate φ_{c1,c2,t} in this manner could be exponential as well, simply because the formula could become exponentially large. This problem is solved by universally quantifying, using variables c3 and c4, over the configuration pairs (e.g., (c1, m1) and (m1, c2)), which prevents the length of the formula from expanding due to recursive layers. This yields the following interpretation of φ_{c1,c2,t}:

φ_{c1,c2,t} = ∃m1 ∀(c3,c4) ∈ {(c1,m1), (m1,c2)} (φ_{c3,c4,⌈t/2⌉}).
This version of the formula can indeed be computed in polynomial time, since any one instance of it can be computed in polynomial time. The universally quantified ordered pair simply tells us that whichever choice of (c3, c4) is made, φ_{c3,c4,⌈t/2⌉} must hold.
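A short size calculation makes the polynomial bound explicit (a sketch; the exponent k depends on the space bound of the machine for L). Each recursive layer now adds only the quantifiers and the encoding of one additional configuration, rather than doubling the formula:

```latex
% S(t): length of \varphi_{c_1,c_2,t}; a single configuration is encoded with
% O(n^k) Boolean variables. With the universally quantified pair, each layer
% of the recursion adds only O(n^k) symbols instead of doubling the formula:
\[
  S(t) = S\!\left(\lceil t/2 \rceil\right) + O\!\left(n^{k}\right),
  \qquad S(1) = O\!\left(n^{k}\right).
\]
% Unfolding the recursion down from t = T, with T = O(\exp(n^{k})), gives
\[
  S(T) = O\!\left(n^{k} \log T\right) = O\!\left(n^{2k}\right),
\]
% which is polynomial in the input length n.
```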
Thus, L ≤p TQBF, so TQBF is PSPACE-hard. Together with the above result that TQBF is in PSPACE, this completes the proof that TQBF is a PSPACE-complete language.
(This proof follows Sipser 2006 pp. 310–313 in all essentials. Papadimitriou 1994 also includes a proof.)