Resolution proof compression by splitting

In mathematical logic, proof compression by splitting is an algorithm that operates as a post-process on resolution proofs. It was proposed by Scott Cotton in his paper "Two Techniques for Minimizing Resolution Proofs".[1]

In proof theory, an area of mathematical logic, proof compression is the problem of algorithmically compressing formal proofs. The developed algorithms can be used to improve the proofs generated by automated theorem proving tools such as SAT solvers, SMT solvers, first-order theorem provers, and proof assistants.

The Splitting algorithm is based on the following observation:

Given a proof of unsatisfiability $\pi$ and a variable $x$, it is easy to re-arrange (split) the proof into a proof of $x$ and a proof of $\neg x$, and the recombination of these two proofs (by an additional resolution step) may result in a proof smaller than the original.

Note that applying Splitting to a proof $\pi$ using a variable $x$ does not invalidate a later application of the algorithm using a different variable $y$. Actually, the method proposed by Cotton [1] generates a sequence of proofs $\pi_1, \pi_2, \ldots$, where each proof $\pi_{i+1}$ is the result of applying Splitting to $\pi_i$. During the construction of the sequence, if a proof $\pi_j$ happens to be too large, $\pi_{j+1}$ is set to be the smallest proof in $\{\pi_1, \ldots, \pi_j\}$.
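A minimal Python sketch of this outer loop is given below. The helper names `pick_variable`, `split`, `recombine`, and `size` are hypothetical placeholders, not taken from Cotton's paper: `split` is assumed to return the two sub-proofs, `recombine` to join them with one resolution step on the chosen variable, and `size` to measure a proof. The heuristic behind `pick_variable` is described in the following paragraphs.

```python
def compress_by_splitting(proof, rounds, size_limit,
                          pick_variable, split, recombine, size):
    """Iterative splitting: build the sequence pi_1, pi_2, ..., where each
    proof is obtained by splitting the previous one on some variable and
    recombining the two halves with one extra resolution step. If a proof
    grows beyond size_limit, fall back to the smallest proof seen so far.
    All helpers are hypothetical and must be supplied by the caller."""
    proofs = [proof]                                 # the sequence pi_1, pi_2, ...
    current = proof
    for _ in range(rounds):
        x = pick_variable(current)                   # heuristic variable choice
        proof_pos, proof_neg = split(current, x)     # proof of x and proof of not-x
        candidate = recombine(proof_pos, proof_neg, x)  # one final resolution step
        if size(candidate) > size_limit:
            candidate = min(proofs, key=size)        # restart from the smallest so far
        proofs.append(candidate)
        current = candidate
    return min(proofs, key=size)
```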

To achieve a better compression/time ratio, a heuristic for variable selection is desirable. For this purpose, Cotton [1] defines the "additivity" of a resolution step (with antecedents $p$ and $n$ and resolvent $r$):

$$\operatorname{add}(r) := \max(|r| - \max(|p|, |n|),\, 0)$$

Then, for each variable $v$, a score is calculated by summing the additivity of all the resolution steps in $\pi$ with pivot $v$, together with the number of these resolution steps. Denoting each score calculated this way by $\operatorname{add}(v)$, each variable is selected with a probability proportional to its score:

$$p(v) = \frac{\operatorname{add}(v)}{\sum_{x} \operatorname{add}(x)}$$
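A Python sketch of this selection heuristic follows. It assumes a proof is available as an iterable of resolution steps, each exposing `pivot`, `resolvent`, `pos_antecedent`, and `neg_antecedent` as clause-like collections; this step representation is an assumption made for illustration, not a data structure from the paper.

```python
import random
from collections import defaultdict

def pick_variable(steps, rng=random):
    """Sample a pivot variable with probability proportional to its score.
    The score of a variable v is the sum of the additivities of all steps
    with pivot v plus the number of such steps."""
    scores = defaultdict(int)
    for step in steps:
        additivity = max(len(step.resolvent)
                         - max(len(step.pos_antecedent), len(step.neg_antecedent)),
                         0)
        scores[step.pivot] += additivity + 1     # "+ 1" counts the step itself
    variables = list(scores)
    weights = [scores[v] for v in variables]
    return rng.choices(variables, weights=weights, k=1)[0]
```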

To split a proof of unsatisfiability $\pi$ into a proof $\pi_x$ of $x$ and a proof $\pi_{\neg x}$ of $\neg x$, Cotton [1] proposes the following:

Let $l$ denote a literal and $p \odot_x n$ denote the resolvent of clauses $p$ and $n$, where $x \in p$ and $\neg x \in n$. Then, define the map $\pi_l$ on nodes in the resolution DAG of $\pi$ (the cases are read top to bottom, and $v$ ranges over pivot variables other than the variable of $l$):

$$\pi_l(c) := \begin{cases}
c & \text{if } c \text{ is an input clause} \\
\pi_l(p) & \text{if } c = p \odot_l n \\
\pi_l(n) & \text{if } c = p \odot_{\neg l} n \\
\pi_l(p) \odot_v \pi_l(n) & \text{if } c = p \odot_v n,\ v \in \pi_l(p) \text{ and } \neg v \in \pi_l(n) \\
\pi_l(p) & \text{if } c = p \odot_v n \text{ and } v \notin \pi_l(p) \\
\pi_l(n) & \text{if } c = p \odot_v n,\ v \in \pi_l(p) \text{ and } \neg v \notin \pi_l(n)
\end{cases}$$

Also, let $c_\bot$ be the empty clause in $\pi$. Then, $\pi_x$ and $\pi_{\neg x}$ are obtained by computing $\pi_x(c_\bot)$ and $\pi_{\neg x}(c_\bot)$, respectively.
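The following Python sketch computes this map recursively over the resolution DAG. It assumes clauses are represented as frozensets of non-zero integers (negation by sign) and that each node exposes `clause` and, for internal nodes, `pivot` (a positive integer), `pos`, and `neg`, with the pivot occurring positively in `pos.clause` and negatively in `neg.clause`; input nodes have `pivot` set to `None`. These representational choices are assumptions for illustration, and for brevity the sketch returns only the clause labelling each transformed node rather than a rebuilt sub-proof.

```python
def project(node, lit, cache=None):
    """Return the clause of pi_l(node) for the split literal lit (x or -x)."""
    if cache is None:
        cache = {}
    if id(node) in cache:                        # reuse results on the shared DAG
        return cache[id(node)]
    if node.pivot is None:                       # input clause: kept unchanged
        result = node.clause
    else:
        p = project(node.pos, lit, cache)
        n = project(node.neg, lit, cache)
        v = node.pivot
        if v == abs(lit):
            # a step resolving on the split variable is dropped; keep the
            # antecedent that still contains the literal being proved
            result = p if lit > 0 else n
        elif v in p and -v in n:
            result = frozenset((p - {v}) | (n - {-v}))   # replay the resolution
        elif v not in p:
            result = p                           # pivot already eliminated from p
        else:
            result = n                           # negated pivot eliminated from n
    cache[id(node)] = result
    return result
```

Calling `project` on the node of the empty clause with `lit = x` and `lit = -x` yields the conclusions of $\pi_x$ and $\pi_{\neg x}$; resolving these two clauses on $x$ recovers the empty clause.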

Notes

  1. Cotton, Scott. "Two Techniques for Minimizing Resolution Proofs". 13th International Conference on Theory and Applications of Satisfiability Testing (SAT 2010), 2010.
