In computational complexity theory, a function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem. For function problems, the output is not simply 'yes' or 'no'.
A functional problem $P$ is defined by a relation $R$ over strings of an arbitrary alphabet $\Sigma$:

$$R \subseteq \Sigma^* \times \Sigma^*.$$
An algorithm solves $P$ if for every input $x$ such that there exists a $y$ satisfying $(x, y) \in R$, the algorithm produces one such $y$, and if there is no such $y$, it rejects.
A promise function problem is allowed to do anything (thus may not terminate) if no such $y$ exists.
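Equivalently, in symbols (a restatement of the definition above, writing $A(x)$ for the algorithm's output on input $x$):

$$\forall x \in \Sigma^*:\quad \bigl(\exists y\; (x, y) \in R\bigr) \;\implies\; (x, A(x)) \in R.$$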
A well-known function problem is given by the Functional Boolean Satisfiability Problem, FSAT for short. The problem, which is closely related to the SAT decision problem, can be formulated as follows:

Given a Boolean formula $\varphi$ with variables $x_1, \ldots, x_n$, find an assignment $x_i \to \{\text{TRUE}, \text{FALSE}\}$ such that $\varphi$ evaluates to TRUE, or decide that no such assignment exists.
In this case the relation $R$ is given by tuples of suitably encoded Boolean formulas and satisfying assignments. While a SAT algorithm, fed with a formula $\varphi$, only needs to return "unsatisfiable" or "satisfiable", an FSAT algorithm needs to return some satisfying assignment in the latter case.
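As a toy illustration, the following Python sketch decides membership in this relation, i.e., whether a given assignment satisfies a given formula. The nested-tuple encoding of formulas is an assumption made here for readability, not a standard one:

```python
def satisfies(formula, assignment):
    """Return True iff `assignment` (a dict var -> bool) makes `formula`
    evaluate to True. Formulas are encoded as nested tuples, e.g.
    ("and", ("var", "a"), ("not", ("var", "b"))) -- an illustrative
    encoding assumed here, not a standard one."""
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not satisfies(formula[1], assignment)
    if op == "and":
        return all(satisfies(f, assignment) for f in formula[1:])
    if op == "or":
        return any(satisfies(f, assignment) for f in formula[1:])
    raise ValueError(f"unknown operator: {op}")

# (x, y) is in the FSAT relation R iff satisfies(x, y) is True:
phi = ("and", ("var", "a"), ("not", ("var", "b")))
print(satisfies(phi, {"a": True, "b": False}))  # True
print(satisfies(phi, {"a": True, "b": True}))   # False
```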
Other notable examples include the travelling salesman problem, which asks for the route taken by the salesman, and the integer factorization problem, which asks for the list of factors.
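The input/output shape of the factorization problem, for instance, can be shown with a brute-force trial-division sketch (exponential in the bit-length of the input, so illustrative only):

```python
def prime_factors(n):
    """Return the list of prime factors of n (with multiplicity)
    by trial division -- a brute-force sketch meant only to show
    that the output is a list of factors, not a yes/no answer."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(84))  # [2, 2, 3, 7]
```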
Consider an arbitrary decision problem $L$ in the class NP. By the definition of NP, each problem instance $x$ that is answered 'yes' has a polynomial-size certificate $y$ which serves as a proof for the 'yes' answer. Thus, the set of these tuples $(x, y)$ forms a relation, representing the function problem "given $x$ in $L$, find a certificate for $x$". This function problem is called the function variant of $L$; it belongs to the class FNP.
FNP can be thought of as the function class analogue of NP, in that solutions to FNP problems can be verified efficiently (i.e., in time polynomial in the length of the input) but not necessarily found efficiently. In contrast, the class FP, which can be thought of as the function class analogue of P, consists of function problems whose solutions can be found in polynomial time.
Observe that the problem FSAT introduced above can be solved using only polynomially many calls to a subroutine which decides the SAT problem: an algorithm can first ask whether the formula $\varphi$ is satisfiable. After that the algorithm can fix variable $x_1$ to TRUE and ask again. If the resulting formula is still satisfiable the algorithm keeps $x_1$ fixed to TRUE and continues to fix $x_2$; otherwise it decides that $x_1$ has to be FALSE and continues. Thus, FSAT is solvable in polynomial time using an oracle deciding SAT. In general, a problem in NP is called self-reducible if its function variant can be solved in polynomial time using an oracle deciding the original problem. Every NP-complete problem is self-reducible. It is conjectured that the integer factorization problem is not self-reducible, because deciding whether an integer is prime is in P (easy),[1] while the integer factorization problem is believed to be hard for a classical computer. There are several (slightly different) notions of self-reducibility.[2][3][4]
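A Python sketch of this self-reduction follows. The helpers `sat_oracle` (decides satisfiability) and `substitute` (fixes a variable to a constant) are hypothetical names assumed to be supplied by the caller:

```python
def fsat_via_sat_oracle(formula, variables, sat_oracle, substitute):
    """Find a satisfying assignment using only len(variables) + 1
    calls to a SAT decision oracle.

    sat_oracle(f)        -- hypothetical: True iff f is satisfiable
    substitute(f, v, b)  -- hypothetical: f with variable v fixed to b
    """
    if not sat_oracle(formula):
        return None  # reject: no satisfying assignment exists
    assignment = {}
    for v in variables:
        fixed = substitute(formula, v, True)
        if sat_oracle(fixed):
            assignment[v] = True   # keep v fixed to TRUE
        else:
            # Satisfiable overall but not with v = TRUE, so some
            # satisfying assignment must set v = FALSE.
            fixed = substitute(formula, v, False)
            assignment[v] = False
        formula = fixed  # invariant: formula stays satisfiable
    return assignment
```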
Function problems can be reduced much like decision problems: given function problems $P$ and $Q$ with defining relations $R_P$ and $R_Q$, we say that $P$ reduces to $Q$ if there exist polynomial-time computable functions $f$ and $g$ such that for all instances $x$ of $P$ and possible solutions $y$ of $Q$, it holds that

$$\exists y\; (x, y) \in R_P \;\implies\; \exists z\; (f(x), z) \in R_Q,$$
$$(f(x), y) \in R_Q \;\implies\; (x, g(x, y)) \in R_P.$$
It is therefore possible to define FNP-complete problems, analogous to NP-complete problems:
A problem $P$ is FNP-complete if every problem in FNP can be reduced to $P$. The complexity class of FNP-complete problems is denoted by FNP-C or FNPC. Hence the problem FSAT is also an FNP-complete problem, and it holds that P = NP if and only if FP = FNP.
The relation $R$ used to define function problems has the drawback of being incomplete: not every input $x$ has a counterpart $y$ such that $(x, y) \in R$. Therefore the question of the computability of proofs is not separated from the question of their existence. To overcome this problem it is convenient to consider the restriction of function problems to total relations, yielding the class TFNP as a subclass of FNP. This class contains problems such as the computation of pure Nash equilibria in certain strategic games, where a solution is guaranteed to exist. In addition, if TFNP contains any FNP-complete problem it follows that NP = co-NP.