In logic and theoretical computer science, and specifically proof theory and computational complexity theory, proof complexity is the field aiming to understand and analyse the computational resources that are required to prove or refute statements. Research in proof complexity is predominantly concerned with proving proof-length lower and upper bounds in various propositional proof systems. For example, among the major challenges of proof complexity is showing that the Frege system, the usual propositional calculus, does not admit polynomial-size proofs of all tautologies. Here the size of the proof is simply the number of symbols in it, and a proof is said to be of polynomial size if it is polynomial in the size of the tautology it proves.
Systematic study of proof complexity began with the work of Stephen Cook and Robert Reckhow (1979), who provided the basic definition of a propositional proof system from the perspective of computational complexity. Specifically, Cook and Reckhow observed that proving proof-size lower bounds on stronger and stronger propositional proof systems can be viewed as a step towards separating NP from coNP (and thus P from NP), since the existence of a propositional proof system that admits polynomial-size proofs for all tautologies is equivalent to NP = coNP.
Contemporary proof complexity research draws ideas and methods from many areas in computational complexity, algorithms and mathematics. Since many important algorithms and algorithmic techniques can be cast as proof search algorithms for certain proof systems, proving lower bounds on proof sizes in these systems implies run-time lower bounds on the corresponding algorithms. This connects proof complexity to more applied areas such as SAT solving.
Mathematical logic can also serve as a framework to study propositional proof sizes. First-order theories and, in particular, weak fragments of Peano arithmetic, which come under the name of bounded arithmetic, serve as uniform versions of propositional proof systems and provide further background for interpreting short propositional proofs in terms of various levels of feasible reasoning.
A propositional proof system is given as a proof-verification algorithm P(A,x) with two inputs. If P accepts the pair (A,x) we say that x is a P-proof of A. P is required to run in time polynomial in the size of its input, and moreover, it must hold that A has a P-proof if and only if A is a tautology.
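As an illustration, consider the simplest example of such a verifier: the "truth-table" proof system, in which a proof of A is the exhaustive list of all assignments, each of which the verifier re-checks. The Python sketch below encodes formulas as Boolean predicates, an ad hoc choice made for illustration rather than a standard convention.

    from itertools import product

    # A minimal sketch of a proof-system verifier P(A, x) in the sense of
    # Cook and Reckhow.  In this toy "truth-table" system, a proof of a
    # formula over n variables is the full list of assignments, and the
    # verifier re-checks every row, running in time polynomial in the
    # size of the pair (A, x).
    def verify(formula, n, proof):
        rows = list(product([False, True], repeat=n))
        return proof == rows and all(formula(*row) for row in rows)

    # "x or not x" is a tautology, so it has a proof in this system:
    assert verify(lambda x: x or not x, 1, [(False,), (True,)])

Note that the verifier is polynomial-time as required even though the proofs themselves are exponentially long in the number of variables; the definition constrains the verifier, not the proof length.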
Examples of propositional proof systems include sequent calculus, resolution, cutting planes and Frege systems. Strong mathematical theories such as ZFC induce propositional proof systems as well: a proof of a tautology A in a propositional interpretation of ZFC is a ZFC-proof of the formalized statement 'A is a tautology'.
Proof complexity measures the efficiency of a proof system usually in terms of the minimal size of proofs possible in the system for a given tautology. The size of a proof (respectively formula) is the number of symbols needed to represent the proof (respectively formula). A propositional proof system P is polynomially bounded if there exists a constant c such that every tautology of size n has a P-proof of size at most n^c. A central question of proof complexity is to understand whether tautologies admit polynomial-size proofs. Formally,
Problem (NP vs. coNP)
Does there exist a polynomially bounded propositional proof system?
Cook and Reckhow (1979) observed that there exists a polynomially bounded proof system if and only if NP = coNP. Therefore, proving that specific proof systems do not admit polynomial-size proofs can be seen as partial progress towards separating NP and coNP (and thus P and NP). [1]
Proof complexity compares the strength of proof systems using the notion of simulation. A proof system P p-simulates a proof system Q if there is a polynomial-time function that, given a Q-proof of a tautology, outputs a P-proof of the same tautology. If P p-simulates Q and Q p-simulates P, the proof systems P and Q are p-equivalent. There is also a weaker notion of simulation: a proof system P simulates a proof system Q if there is a polynomial p such that for every Q-proof x of a tautology A, there is a P-proof y of A such that the length of y, |y|, is at most p(|x|).
For example, sequent calculus is p-equivalent to (every) Frege system. [2]
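As a degenerate but instructive instance, every tree-like Resolution refutation is by definition also a (dag-like) Resolution refutation, so the identity map witnesses that Resolution p-simulates tree-like Resolution. The sketch below merely records the contract such a translation must satisfy; the function name is illustrative.

    def p_simulation(q_proof):
        # Translate a Q-proof into a P-proof of the same tautology.
        # To witness a p-simulation, this function must (i) run in time
        # polynomial in the length of q_proof and (ii) output a valid
        # P-proof of the tautology that q_proof proves.  For Q = tree-like
        # Resolution and P = (dag-like) Resolution, the identity map
        # suffices, since every tree-like refutation is already dag-like.
        return q_proof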
A proof system is p-optimal if it p-simulates all other proof systems, and it is optimal if it simulates all other proof systems. It is an open problem whether such proof systems exist:
Problem (Optimality)
Does there exist a p-optimal or optimal propositional proof system?
Every propositional proof system P can be simulated by the Extended Frege system augmented with axioms postulating the soundness of P. [3] The existence of an optimal (respectively p-optimal) proof system is known to follow from the assumption that NE = coNE (respectively E = NE). [4]
For many weak proof systems it is known that they do not simulate certain stronger systems (see below). However, the question remains open if the notion of simulation is relaxed. For example, it is open whether Resolution effectively polynomially simulates Extended Frege. [5]
An important question in proof complexity is to understand the complexity of searching for proofs in proof systems.
Problem (Automatability)
Are there efficient algorithms searching for proofs in standard proof systems such as Resolution or the Frege system?
The question can be formalized by the notion of automatability (also known as automatizability). [6]
A proof system P is automatable if there is an algorithm that, given a tautology A, outputs a P-proof of A in time polynomial in the size of A and the length of the shortest P-proof of A. Note that if a proof system is not polynomially bounded, it can still be automatable. A proof system P is weakly automatable if there is a proof system R and an algorithm that, given a tautology A, outputs an R-proof of A in time polynomial in the size of A and the length of the shortest P-proof of A.
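To see what automatability rules out, consider naive proof search, sketched below with the proof system abstracted as a verifier and proofs encoded as strings over a fixed alphabet (an illustrative setup). Enumerating candidate proofs by increasing length always finds a shortest proof of a tautology, but takes time exponential in its length, whereas an automating algorithm must run in time polynomial in it.

    from itertools import count, product

    def brute_force_proof_search(verify, A, alphabet="01"):
        # Enumerate candidate proofs of the tautology A by increasing
        # length and return the first one the verifier accepts.  This
        # takes time exponential in the length of the shortest P-proof
        # of A, so it does NOT witness automatability, which demands
        # time polynomial in that length (and in the size of A).
        for length in count():
            for letters in product(alphabet, repeat=length):
                x = "".join(letters)
                if verify(A, x):
                    return x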
Many proof systems of interest are believed to be non-automatable. However, currently only conditional negative results are known.
It is not known if the weak automatability of Resolution would break any standard complexity-theoretic hardness assumptions.
On the positive side, tree-like Resolution is known to be automatable in quasi-polynomial time.
Propositional proof systems can be interpreted as nonuniform equivalents of theories of higher order. The equivalence is most often studied in the context of theories of bounded arithmetic. For example, the Extended Frege system corresponds to Cook's theory PV1 formalizing polynomial-time reasoning, and the Frege system corresponds to the theory VNC^1 formalizing NC^1 reasoning.
The correspondence was introduced by Stephen Cook (1975), who showed that coNP theorems, formally Π^b_1 formulas, of the theory PV translate to sequences of tautologies with polynomial-size proofs in Extended Frege. Moreover, Extended Frege is the weakest such system: if another proof system P has this property, then P simulates Extended Frege. [19]
An alternative translation between second-order statements and propositional formulas given by Jeff Paris and Alex Wilkie (1985) has been more practical for capturing subsystems of Extended Frege such as Frege or constant-depth Frege. [20] [21]
While the above-mentioned correspondence says that proofs in a theory translate to sequences of short proofs in the corresponding proof system, a form of the opposite implication holds as well. It is possible to derive lower bounds on the size of proofs in a proof system P by constructing suitable models of a theory T corresponding to P. This makes it possible to prove proof-size lower bounds via model-theoretic constructions, an approach known as Ajtai's method. [22]
Propositional proof systems can be interpreted as nondeterministic algorithms for recognizing tautologies. Proving a superpolynomial lower bound on a proof system P thus rules out the existence of a polynomial-time algorithm for SAT based on P. For example, runs of the DPLL algorithm on unsatisfiable instances correspond to tree-like Resolution refutations. Therefore, exponential lower bounds for tree-like Resolution (see below) rule out the existence of efficient DPLL algorithms for SAT. Similarly, exponential Resolution lower bounds imply that SAT solvers based on Resolution, such as CDCL algorithms, cannot solve SAT efficiently in the worst case.
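The correspondence can be made concrete with a minimal DPLL sketch; the clause encoding below and the omission of unit propagation and other standard optimizations are simplifications for illustration, not features of any particular solver.

    def dpll(clauses):
        # Decide satisfiability of a CNF given as a list of frozensets
        # of nonzero integers (a negative integer is a negated variable).
        # On an unsatisfiable input, the branching tree traced out by the
        # recursion corresponds to a tree-like Resolution refutation.
        if not clauses:
            return True                  # every clause is satisfied
        if frozenset() in clauses:
            return False                 # empty clause: contradiction
        v = abs(next(iter(clauses[0])))  # pick a variable to branch on
        for literal in (v, -v):          # try v = True, then v = False
            # drop satisfied clauses, remove the falsified literal
            reduced = [c - {-literal} for c in clauses if literal not in c]
            if dpll(reduced):
                return True
        return False

    # (x1 or x2) and (not x1) and (not x2) is unsatisfiable:
    assert not dpll([frozenset({1, 2}), frozenset({-1}), frozenset({-2})])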
Proving lower bounds on lengths of propositional proofs is generally very difficult. Nevertheless, several methods for proving lower bounds for weak proof systems have been discovered.
It is a long-standing open problem to derive a nontrivial lower bound for the Frege system.
Consider a tautology of the form A0(x,z) ∨ A1(y,z), where x, y and z are disjoint tuples of propositional variables. The tautology is true for every choice of z, and after fixing z the evaluations of A0 and A1 are independent because they are defined on disjoint sets of variables. This means that it is possible to define an interpolant circuit C(z) such that both A0(x,z) ∨ C(z) and ¬C(z) ∨ A1(y,z) are tautologies: looking only at z, the circuit decides which disjunct is guaranteed to hold, with C(z) = 0 forcing A0 and C(z) = 1 forcing A1. The nature of the interpolant circuit can be arbitrary. Nevertheless, it is possible to use a proof of the initial tautology as a hint on how to construct C. A proof system P is said to have feasible interpolation if the interpolant is efficiently computable from any P-proof of the tautology A0(x,z) ∨ A1(y,z). The efficiency is measured with respect to the length of the proof: it is easier to compute interpolants for longer proofs, so this property appears to be anti-monotone in the strength of the proof system.
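For a toy instance (the particular formulas are chosen purely for illustration), take A0(x,z) = x ∨ z and A1(y,z) = ¬z ∨ y. Their disjunction is a tautology, and C(z) = ¬z is an interpolant, as the brute-force check below confirms.

    from itertools import product

    # Illustrative instance of interpolation: A0(x,z) = x or z,
    # A1(y,z) = (not z) or y, with interpolant C(z) = not z.
    A0 = lambda x, z: x or z
    A1 = lambda y, z: (not z) or y
    C = lambda z: not z

    for x, y, z in product([False, True], repeat=3):
        assert A0(x, z) or A1(y, z)      # the original tautology
        assert A0(x, z) or C(z)          # C(z) = 0 forces A0 to hold
        assert (not C(z)) or A1(y, z)    # C(z) = 1 forces A1 to hold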
The following three statements cannot be simultaneously true: (a) A0(x,z) ∨ A1(y,z) has a short proof in some proof system; (b) the proof system has feasible interpolation; (c) the interpolant circuit solves a computationally hard problem. It is clear that (a) and (b) imply that there is a small interpolant circuit, which contradicts (c). This relation allows the conversion of proof-length upper bounds into lower bounds on computations and, dually, the conversion of efficient interpolation algorithms into lower bounds on proof length.
Some proof systems such as Resolution and Cutting Planes admit feasible interpolation or its variants. [28] [29]
Feasible interpolation can be seen as a weak form of automatability. In fact, for many proof systems, such as Extended Frege, feasible interpolation is equivalent to weak automatability. Specifically, many proof systems P are able to prove their own soundness, which is a tautology stating that 'if x is a P-proof of a formula A, then A holds'. Here, x and A are encoded by free variables. Moreover, it is possible to generate P-proofs of this soundness statement in polynomial time given the lengths of x and A. Therefore, an efficient interpolant resulting from short P-proofs of the soundness of P would decide whether a given formula A admits a short P-proof x. Such an interpolant can be used to define a proof system R witnessing that P is weakly automatable. [30] On the other hand, weak automatability of a proof system P implies that P admits feasible interpolation. However, if a proof system P does not efficiently prove its own soundness, then it might not be weakly automatable even if it admits feasible interpolation.
Many non-automatability results provide evidence against feasible interpolation in the respective systems.
The idea of comparing the size of proofs can be used for any automated reasoning procedure that generates a proof. Some research has been done on the size of proofs for propositional non-classical logics, in particular intuitionistic, modal, and non-monotonic logics.
Hrubeš (2007–2009) proved exponential lower bounds on the size of proofs in the Extended Frege system in some modal logics and in intuitionistic logic using a version of monotone feasible interpolation. [34] [35] [36]