Natural proof


In computational complexity theory, a natural proof is a certain kind of proof establishing that one complexity class differs from another one. While these proofs are in some sense "natural", it can be shown (assuming a widely believed conjecture on the existence of pseudorandom functions) that no such proof can possibly be used to solve the P vs. NP problem.


Overview

The notion of natural proofs was introduced by Alexander Razborov and Steven Rudich in their article "Natural Proofs", first presented in 1994 and later published in 1997, for which they received the 2007 Gödel Prize.[1]

Specifically, natural proofs prove lower bounds on the circuit complexity of boolean functions. A natural proof shows, either directly or indirectly, that a boolean function has a certain natural combinatorial property. Under the assumption that pseudorandom functions exist with "exponential hardness" as specified in their main theorem, Razborov and Rudich show that these proofs cannot separate certain complexity classes. Notably, assuming pseudorandom functions exist, these proofs cannot separate the complexity classes P and NP.[2]

For example, their article states:

[...] consider a commonly envisioned proof strategy for proving P ≠ NP:
  • Formulate some mathematical notion of "discrepancy" or "scatter" or "variation" of the values of a Boolean function, or of an associated polytope or other structure. [...]
  • Show by an inductive argument that polynomial-sized circuits can only compute functions of "low" discrepancy. [...]
  • Then show that SAT, or some other function in NP, has "high" discrepancy.
Our main theorem in Section 4 gives evidence that no proof strategy along these lines can ever succeed.

A property of boolean functions is defined to be natural if it contains a property meeting the constructivity and largeness conditions defined by Razborov and Rudich. Roughly speaking, the constructivity condition requires that a property be decidable in (quasi-)polynomial time when the 2^n-sized truth table of an n-input boolean function is given as input, asymptotically as n increases. This is the same as time singly exponential in n. Properties that are easy to understand are likely to satisfy this condition. The largeness condition requires that the property hold for a sufficiently large fraction of the set of all boolean functions.
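A rough formalization may help; the notation below (F_n for the set of all n-input boolean functions, C_n ⊆ F_n for the functions having the property) is introduced here only for illustration and follows the spirit, though not the exact letter, of Razborov and Rudich's definitions:
  • Constructivity: given the $2^n$-bit truth table of $f_n$, membership $f_n \in C_n$ is decidable in time polynomial (or quasi-polynomial) in $2^n$.
  • Largeness: $|C_n| \geq 2^{-O(n)} \cdot |F_n|$, i.e. at least a $2^{-O(n)}$ fraction of all n-input boolean functions have the property.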

A property is useful against a complexity class C if every sequence of boolean functions having the property infinitely often defines a language outside of C. A natural proof is a proof that establishes that a certain language lies outside of C and refers to a natural property that is useful against C.
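In the same illustrative notation, usefulness and the main theorem can be sketched informally (omitting the precise hardness parameters) as follows:
  • Usefulness against C: for every sequence $f = (f_n)_{n \geq 1}$, if $f_n \in C_n$ for infinitely many n, then the language defined by $f$ lies outside C.
  • Main theorem (roughly): if a constructive, large property useful against P/poly exists, then no pseudorandom function family computable in P/poly is secure against adversaries running in time $2^{n^{\varepsilon}}$, for any fixed $\varepsilon > 0$; equivalently, if such exponentially hard pseudorandom functions exist, no natural proof can establish super-polynomial circuit lower bounds against P/poly.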

Razborov and Rudich give a number of examples of lower-bound proofs against classes C smaller than P/poly that can be "naturalized", i.e. converted into natural proofs. An important example treats proofs that the parity problem is not in the class AC0. They give strong evidence that the techniques used in these proofs cannot be extended to show stronger lower bounds. In particular, AC0-natural proofs cannot be useful against AC0[m].

Razborov and Rudich also reproduce Avi Wigderson's unconditional proof that natural proofs cannot prove exponential lower bounds for the discrete logarithm problem.

It is widely believed that the Razborov–Rudich mechanism actually blocks lower-bound proofs against the complexity class TC0 of constant-depth, polynomial-size threshold circuits, which is believed, but not proven, to be smaller than P/poly.[3] This belief rests on the fact that, under widely believed conjectures regarding the hardness of factoring in certain elliptic curve groups, there exist exponentially hard pseudorandom functions computable in TC0.[4] However, some researchers believe that the Razborov–Rudich limitations are actually good guidance for what a "super-natural" lower-bound proof might involve, such as properties hard or complete for exponential space.[5]

Notes

  1. "ACM-SIGACT 2007 Gödel Prize". Archived from the original on 2016-03-03. Retrieved 2014-08-11.
  2. A. A. Razborov and S. Rudich (1997). "Natural proofs". Journal of Computer and System Sciences. 55: 24–35. doi:10.1006/jcss.1997.1494.
  3. "Complexity Zoo:T - Complexity Zoo".
  4. Naor, Moni; Reingold, Omer (2004). "Number-theoretic constructions of efficient pseudo-random functions". Journal of the ACM. 51 (2): 231–262. doi:10.1145/972639.972643. S2CID 8665271.
  5. K. Regan (October 2002). "Understanding the Mulmuley-Sohoni Approach to P vs. NP" (PDF). Bulletin of the European Association for Theoretical Computer Science. 78: 86–97.

Related Research Articles

In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances. BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine.
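In terms of acceptance probabilities, the defining condition can be written as follows (a standard formulation; the constant 1/3 is arbitrary, since the error can be reduced by repeating the computation and taking a majority vote): a language L is in BPP if there is a probabilistic polynomial-time machine M such that
  $x \in L \Rightarrow \Pr[M(x)\text{ accepts}] \geq 2/3$ and $x \notin L \Rightarrow \Pr[M(x)\text{ accepts}] \leq 1/3$.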

The P versus NP problem is a major unsolved problem in theoretical computer science. In informal terms, it asks whether every problem whose solution can be quickly verified can also be quickly solved.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer; it can be solved by the mechanical application of mathematical steps, such as an algorithm.

<span class="mw-page-title-main">Complexity class</span> Set of problems in computational complexity theory

In computational complexity theory, a complexity class is a set of computational problems of related resource-based complexity. The two most commonly analyzed resources are time and memory.

Johan Torkel Håstad is a Swedish theoretical computer scientist best known for his work on computational complexity theory. He was the recipient of the Gödel Prize in 1994 and 2011 and the ACM Doctoral Dissertation Award in 1986, among other prizes. He has been a professor in theoretical computer science at KTH Royal Institute of Technology in Stockholm, Sweden since 1988, becoming a full professor in 1992. He has been a member of the Royal Swedish Academy of Sciences since 2001.

In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution.
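Written out, the requirement is (a standard formulation, with $\varepsilon$ denoting the permitted distinguishing advantage): for every statistical test T in the class,
  $\bigl| \Pr_{s}[T(G(s)) = 1] - \Pr_{r}[T(r) = 1] \bigr| \leq \varepsilon$,
where s is a uniformly random seed and r is a uniformly random string of the same length as the output G(s).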

<span class="mw-page-title-main">Russell Impagliazzo</span> American computer scientist

Russell Graham Impagliazzo is a professor of computer science at the University of California, San Diego, specializing in computational complexity theory.

In logic and theoretical computer science, and specifically proof theory and computational complexity theory, proof complexity is the field aiming to understand and analyse the computational resources that are required to prove or refute statements. Research in proof complexity is predominantly concerned with proving proof-length lower and upper bounds in various propositional proof systems. For example, among the major challenges of proof complexity is showing that the Frege system, the usual propositional calculus, does not admit polynomial-size proofs of all tautologies. Here the size of the proof is simply the number of symbols in it, and a proof is said to be of polynomial size if it is polynomial in the size of the tautology it proves.

MAX-3SAT is a problem in the computational complexity subfield of computer science. It generalises the Boolean satisfiability problem (SAT), a decision problem considered in complexity theory. It is defined as: given a Boolean formula in conjunctive normal form with at most three literals per clause, find an assignment that satisfies the largest number of clauses.

Aleksandr Aleksandrovich Razborov, sometimes known as Sasha Razborov, is a Soviet and Russian mathematician and computational theorist. He is Andrew McLeish Distinguished Service Professor at the University of Chicago.

<span class="mw-page-title-main">Boolean circuit</span> Model of computation

In computational complexity theory and circuit complexity, a Boolean circuit is a mathematical model for combinational digital logic circuits. A formal language can be decided by a family of Boolean circuits, one circuit for each possible input length.

<span class="mw-page-title-main">Circuit complexity</span> Model of computational complexity

In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits.


ACC0, sometimes called ACC, is a class of computational models and problems defined in circuit complexity, a field of theoretical computer science. The class is defined by augmenting the class AC0 of constant-depth "alternating circuits" with the ability to count; the acronym ACC stands for "AC with counters". Specifically, a problem belongs to ACC0 if it can be solved by polynomial-size, constant-depth circuits of unbounded fan-in gates, including gates that count modulo a fixed integer. ACC0 corresponds to computation in any solvable monoid. The class is very well studied in theoretical computer science because of the algebraic connections and because it is one of the largest concrete computational models for which computational impossibility results, so-called circuit lower bounds, can be proved.
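Symbolically, the class is commonly presented as the union
  $\mathrm{ACC}^0 = \bigcup_{m \geq 2} \mathrm{AC}^0[m]$,
where AC0[m] consists of the languages decided by constant-depth, polynomial-size circuits with unbounded fan-in AND, OR, NOT, and MOD_m gates, a MOD_m gate (under the usual convention) outputting 1 exactly when the number of true inputs is divisible by m.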

In computational complexity theory, a gadget is a subset of a problem instance that simulates the behavior of one of the fundamental units of a different computational problem. Gadgets are typically used to construct reductions from one computational problem to another, as part of proofs of NP-completeness or other types of computational hardness. The component design technique is a method for constructing reductions by using gadgets.

Richard Jay Lipton is an American computer scientist who is Associate Dean of Research, Professor, and the Frederick G. Storey Chair in Computing in the College of Computing at the Georgia Institute of Technology. He has worked in computer science theory, cryptography, and DNA computing.

In Boolean algebra, a parity function is a Boolean function whose value is one if and only if the input vector has an odd number of ones. The parity function of two inputs is also known as the XOR function.
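As a formula (using ⊕ for exclusive or),
  $\mathrm{PARITY}(x_1, \ldots, x_n) = x_1 \oplus x_2 \oplus \cdots \oplus x_n = \Bigl( \sum_{i=1}^{n} x_i \Bigr) \bmod 2$.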

In propositional calculus and proof complexity a propositional proof system (pps), also called a Cook–Reckhow propositional proof system, is a system for proving classical propositional tautologies.

In graph theory and circuit complexity, the Tardos function is a graph invariant introduced by Éva Tardos in 1988. It is monotone and can be computed in polynomial time, yet its monotone circuit complexity is exponential, demonstrating an exponential gap between monotone and general circuit complexity.

In computational complexity theory, NP/poly is a complexity class, a non-uniform analogue of the class NP of problems solvable in polynomial time by a non-deterministic Turing machine. It is the non-deterministic complexity class corresponding to the deterministic class P/poly.

In cryptography, indistinguishability obfuscation (iO) is a type of software obfuscation with the defining property that obfuscating any two programs that compute the same mathematical function results in programs that cannot be distinguished from each other. Informally, such obfuscation hides the implementation of a program while still allowing users to run it. Formally, iO satisfies the property that obfuscations of two circuits of the same size which implement the same function are computationally indistinguishable.
