LowerUnivalents

In proof compression, an area of mathematical logic, LowerUnivalents is an algorithm used for the compression of propositional resolution proofs. LowerUnivalents generalises the LowerUnits algorithm: it is able to lower not only unit subproofs but also subproofs of non-unit clauses, provided that they satisfy some additional conditions.[1]
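
The overall structure of the algorithm can be sketched as follows. This is only an informal Python-style outline of the lower-then-reintroduce pattern shared with LowerUnits, not the authors' implementation; traverse, is_univalent, delete_and_fix and resolve are hypothetical placeholder helpers, and the precise side conditions under which a subproof counts as univalent are those given in [1].

    def lower_univalents(proof):
        lowered = []       # subproofs chosen for lowering, in collection order
        covered = set()    # literals already accounted for by lowered subproofs

        # Phase 1: traverse the proof and collect lowerable subproofs.  A node
        # qualifies, roughly, when exactly one of its literals is still actively
        # resolved away below it and every other literal is already covered; the
        # exact conditions are those defined in [1].
        for node in traverse(proof):
            if is_univalent(node, covered):
                lowered.append(node)
                covered |= set(node.conclusion)

        # Phase 2: delete the collected subproofs and repair the proof.
        root = delete_and_fix(proof, lowered)

        # Phase 3: reintroduce the lowered subproofs at the bottom, resolving
        # each one against the current root in reverse collection order.
        for node in reversed(lowered):
            root = resolve(root, node)
        return root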

In proof theory, an area of mathematical logic, proof compression is the problem of algorithmically compressing formal proofs. The algorithms developed for this purpose can be used to improve the proofs generated by automated theorem proving tools such as SAT solvers, SMT solvers, first-order theorem provers and proof assistants.

Mathematical logic is a subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science. The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems.

Algorithm: An unambiguous specification of how to solve a class of problems

In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning, and other tasks.

Related Research Articles

Automated theorem proving is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science.

Stephen Cook: Canadian computer scientist

Stephen Arthur Cook is an American-Canadian computer scientist and mathematician who has made major contributions to the fields of complexity theory and proof complexity. He is a university professor at the University of Toronto, in the Department of Computer Science and the Department of Mathematics.

Image compression: Reduction of image size to save storage and transmission costs

Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data.

Compression artifact: Noticeable distortion of media caused by the application of lossy data compression

A compression artifact is a noticeable distortion of media caused by the application of lossy compression.

In mathematical logic and logic programming, a Horn clause is a logical formula of a particular rule-like form which gives it useful properties for use in logic programming, formal specification, and model theory. Horn clauses are named for the logician Alfred Horn, who first pointed out their significance in 1951.

In formal logic, Horn-satisfiability, or HORNSAT, is the problem of deciding whether a given set of propositional Horn clauses is satisfiable or not. Horn-satisfiability and Horn clauses are named after Alfred Horn.
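
As a small illustration of both notions, the sketch below checks Horn-satisfiability by forward chaining (unit propagation) in Python. It is an illustrative sketch rather than a reference implementation: the (head, body) clause encoding is an assumption made here for brevity, and real linear-time HORNSAT algorithms use counters over clause bodies instead of repeated scans.

    # A definite clause  h <- b1,...,bn  is encoded as (h, {b1,...,bn}),
    # a fact  h <-  as (h, set()), and a goal clause  <- b1,...,bn  as
    # (None, {b1,...,bn}).
    def horn_sat(clauses):
        # Compute the least model of the definite clauses by repeatedly
        # firing rules whose bodies are already satisfied.
        model = set()
        changed = True
        while changed:
            changed = False
            for head, body in clauses:
                if head is not None and head not in model and body <= model:
                    model.add(head)
                    changed = True
        # The set is unsatisfiable iff some goal clause has its whole body
        # forced true in the least model.
        return all(not body <= model for head, body in clauses if head is None)

    # { p <- q, r ;  q ;  r ;  <- p }  is unsatisfiable:
    print(horn_sat([("p", {"q", "r"}), ("q", set()), ("r", set()),
                    (None, {"p"})]))               # False
    # { p <- q ;  <- p }  is satisfiable (make every atom false):
    print(horn_sat([("p", {"q"}), (None, {"p"})]))  # True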

In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how an output of a mathematical function is computed given an input. A model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.

DPLL algorithm

In computer science, the Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a complete, backtracking-based search algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form, i.e. for solving the CNF-SAT problem.
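
A compact, deliberately unoptimised sketch of the DPLL procedure in Python is shown below. The clause representation (sets of integer literals, with negative integers standing for negated variables) is an assumption made here for illustration, and the pure-literal rule and the data structures used by real SAT solvers are omitted; only the unit-propagation-plus-backtracking skeleton is shown.

    # Minimal DPLL sketch: a formula is a list of clauses, a clause a set of
    # integer literals, and -v denotes the negation of variable v.
    def dpll(clauses):
        clauses = [set(c) for c in clauses]
        # Unit propagation: repeatedly assign the literal of any unit clause.
        while True:
            unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
            if unit is None:
                break
            clauses = assign(clauses, unit)
            if clauses is None:              # a clause became empty: conflict
                return False
        if not clauses:                      # every clause satisfied
            return True
        # Branch on a literal of the first remaining clause and backtrack.
        lit = next(iter(clauses[0]))
        return (assign(clauses, lit) is not None and dpll(assign(clauses, lit))) or \
               (assign(clauses, -lit) is not None and dpll(assign(clauses, -lit)))

    def assign(clauses, lit):
        # Make lit true: drop satisfied clauses, remove the opposite literal.
        result = []
        for c in clauses:
            if lit in c:
                continue
            reduced = c - {-lit}
            if not reduced:                  # empty clause: this assignment fails
                return None
            result.append(reduced)
        return result

    # (p or q) and (not p or q) and (p or not q) is satisfiable, e.g. p = q = True.
    print(dpll([{1, 2}, {-1, 2}, {1, -2}]))  # True
    print(dpll([{1}, {-1}]))                 # False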

In theoretical computer science, and specifically computational complexity theory, proof complexity is the field aiming to understand and analyse the computational resources that are required to prove or refute statements. Research in proof complexity is predominantly concerned with proving proof-length lower and upper bounds in various propositional proof systems. For example, among the major challenges of proof complexity is showing that the usual propositional calculus does not admit polynomial-size proofs of all tautologies; here the size of the proof is simply the number of symbols in it, and a proof is said to be of polynomial size if it is polynomial in the size of the tautology it proves.

Fitch notation, also known as Fitch diagrams, is a notational system for constructing formal proofs used in sentential logics and predicate logics. Fitch-style proofs arrange the sequence of sentences that make up the proof into rows. A unique feature of Fitch notation is that the degree of indentation of each row conveys which assumptions are active for that step.
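
For example, a short Fitch-style derivation of Q → (P ∧ Q) from the premise P might be laid out roughly as follows, where the extra indentation (second vertical bar) on rows 2–3 shows that the assumption Q is active only there. This is a schematic illustration, not the output of any particular proof tool.

    1 | P                 premise
    2 | | Q               assumption
    3 | | P ∧ Q           ∧ introduction, 1, 2
    4 | Q → (P ∧ Q)       → introduction, 2–3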

SLD resolution is the basic inference rule used in logic programming. It is a refinement of resolution, which is both sound and refutation complete for Horn clauses.
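
For example, given the Horn clauses p ← q, r and q ← and r ←, SLD resolution refutes the goal ← p in three steps (a standard textbook-style illustration, not tied to any particular system):

    ← p          initial goal clause
    ← q, r       resolve the selected literal p with  p ← q, r
    ← r          resolve the selected literal q with the fact  q ←
    □            resolve r with the fact  r ←, leaving the empty goal: refutation found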

In proof complexity, a Frege system is a propositional proof system whose proofs are sequences of formulas derived using a finite set of sound and implicationally complete inference rules. Frege systems are named after Gottlob Frege.

In propositional calculus and proof complexity a propositional proof system (pps), also called a Cook–Reckhow propositional proof system, is a system for proving classical propositional tautologies.
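
Concretely, in the Cook–Reckhow formulation (paraphrased here) a propositional proof system can be defined as a surjective, polynomial-time computable function

    P : {0,1}* → TAUT,

where a string w with P(w) = φ counts as a P-proof of the tautology φ: soundness corresponds to the range of P containing only tautologies, and completeness to P being onto.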

Homotopy type theory

In mathematical logic and computer science, homotopy type theory refers to various lines of development of intuitionistic type theory, based on the interpretation of types as objects to which the intuition of (abstract) homotopy theory applies.

Toniann Pitassi is a Canadian and American mathematician and computer scientist specializing in computational complexity theory.

In proof theory, an area of mathematical logic, resolution proof reduction via local context rewriting is a technique for compressing resolution proofs by repeatedly rewriting small local parts (contexts) of the proof. This proof compression method was presented as an algorithm named ReduceAndReconstruct, which operates as a post-processing of resolution proofs.
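
At a high level, the method can be pictured as the loop sketched below. This is an informal Python-style outline under the assumption that each rewrite rule acts on a small local context of adjacent resolution steps; rewrite_rules, contexts and reconstruct are hypothetical placeholders standing in for the definitions given by the algorithm's authors.

    def reduce_and_reconstruct(proof, iterations):
        for _ in range(iterations):                # fixed time/iteration budget
            for context in contexts(proof):        # small local parts of the proof
                for rule in rewrite_rules:
                    if rule.applies(context):
                        rule.apply(context)        # local rewriting may shrink the proof
                        break
            proof = reconstruct(proof)             # repair the proof so it is legal again
        return proof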

In proof compression, LowerUnits (LU) is an algorithm used to compress propositional resolution proofs. The main idea of LowerUnits is to exploit the following fact:

Theorem: Let φ be a potentially redundant proof, and η be the redundant node. If η’s clause is a unit clause, then φ is redundant.
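
A rough sketch of how LowerUnits exploits this fact, in the same spirit as the LowerUnivalents sketch above (again with hypothetical placeholder helpers, not the reference implementation):

    def lower_units(proof):
        # Phase 1: collect unit subproofs used as premises of more than one
        # resolution step; by the theorem above they can safely be postponed.
        units = collect_units(proof)

        # Phase 2: delete the collected units and repair the remaining proof.
        root = delete_and_fix(proof, units)

        # Phase 3: resolve each postponed unit back in once, at the bottom.
        for unit in reversed(units):
            root = resolve(root, unit)
        return root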

In mathematical logic, a redundant proof is a proof that has a subset that is a shorter proof of the same result. That is, a proof ψ of a clause κ is considered redundant if there exists another proof ψ′ of a clause κ′ such that κ′ ⊆ κ and |ψ′| < |ψ|, where |ψ| is the number of nodes in ψ.

Univalent foundations are an approach to the foundations of mathematics in which mathematical structures are built out of objects called types. Types in the univalent foundations do not correspond exactly to anything in set-theoretic foundations, but they may be thought of as spaces, with equal types corresponding to homotopy equivalent spaces and with equal elements of a type corresponding to points of a space connected by a path. Univalent foundations are inspired both by the old Platonic ideas of Hermann Grassmann and Georg Cantor and by the "categorical" mathematics in the style of Alexander Grothendieck. They depart from the use of predicate logic as the underlying formal deduction system, replacing it, at the moment, with a version of Martin-Löf type theory. The development of the univalent foundations is closely related to the development of homotopy type theory.

References

  1. Boudou, J., & Paleo, B. W. (2013). Compression of propositional resolution proofs by lowering subproofs. In Automated Reasoning with Analytic Tableaux and Related Methods (pp. 59-73). Springer Berlin Heidelberg.