In computational complexity theory and cryptography, the averaging argument is a standard argument for proving theorems. It usually allows us to convert probabilistic polynomial-time algorithms into non-uniform polynomial-size circuits.
Example: If every person likes at least 1/3 of the books in a library, then the library has a book that at least 1/3 of the people like.
Proof: Suppose there are $N$ people and $B$ books. Each person likes at least $B/3$ of the books. Let people leave a mark on every book they like. Then there will be at least $NB/3$ marks. The averaging argument claims that there exists a book with at least $N/3$ marks on it. Assume, for contradiction, that no such book exists. Then every book has fewer than $N/3$ marks. However, since there are $B$ books, the total number of marks will be fewer than $NB/3$, contradicting the fact that there are at least $NB/3$ marks.
Let X and Y be finite sets, let p be a predicate on X × Y, and let f be a real number in the interval [0, 1]. If for each x in X at least f |Y| of the elements y in Y satisfy p(x, y), then there exists a y in Y such that at least f |X| of the elements x in X satisfy p(x, y).
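As a minimal sketch (the helper name check_averaging and the parameters are illustrative assumptions, not from the article), the combinatorial statement can be brute-forced on small random instances, with people as X, books as Y, "likes" as the predicate p, and f = 1/3:

```python
# A minimal sketch, assuming the library example above: people are X, books
# are Y, "likes" is the predicate p, and f = 1/3.
import math
import random
from fractions import Fraction

def check_averaging(num_people=30, num_books=12, f=Fraction(1, 3), trials=200):
    min_liked = math.ceil(f * num_books)   # hypothesis: everyone likes >= f*|Y| books
    for _ in range(trials):
        likes = [set(random.sample(range(num_books), random.randint(min_liked, num_books)))
                 for _ in range(num_people)]
        # Conclusion promised by the averaging argument: some book is liked by >= f*|X| people.
        most_liked = max(range(num_books), key=lambda b: sum(b in s for s in likes))
        assert sum(most_liked in s for s in likes) >= f * num_people
    return True

print(check_averaging())  # True: no counterexample found
```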
There is another formulation, stated using the terminology of probability theory. [1]
Let $f \colon \{0,1\}^n \times \{0,1\}^m \to \{0,1\}$ be some function. The averaging argument is the following claim: if we have a circuit $C$ such that $C(x,y)=f(x,y)$ with probability at least $\rho$, where $x$ is chosen at random and $y$ is chosen independently from some distribution $Y$ over $\{0,1\}^m$ (which might not even be efficiently sampleable), then there exists a single string $y_0 \in \{0,1\}^m$ such that $\Pr_x[C(x,y_0)=f(x,y_0)] \geq \rho$.
Indeed, for every $y$ define $p_y$ to be $\Pr_x[C(x,y)=f(x,y)]$; then
$$\mathbb{E}_{y \sim Y}[p_y] \geq \rho,$$
and this reduces to the claim that for every random variable $Z$, if $\mathbb{E}[Z] \geq \rho$ then $\Pr[Z \geq \rho] > 0$ (this holds since $\mathbb{E}[Z]$ is a weighted average of the values of $Z$, and clearly if the average of some values is at least $\rho$ then one of the values must be at least $\rho$).
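As a small numerical sketch (the toy target f and "circuit" C below are assumptions, not from the article), the claim can be checked by brute force over all pairs (x, y):

```python
# A toy check of the averaging argument for circuits: if C agrees with f with
# probability rho over a random pair (x, y), then some fixed advice string y0
# already achieves agreement probability >= rho over x alone.
import itertools

n, m = 4, 3
f = lambda x, y: (sum(x) + sum(y)) % 2                   # target function
C = lambda x, y: (sum(x) + sum(y) + x[0] * y[0]) % 2     # imperfect "circuit"

xs = list(itertools.product([0, 1], repeat=n))
ys = list(itertools.product([0, 1], repeat=m))           # uniform distribution over y

p = lambda y: sum(C(x, y) == f(x, y) for x in xs) / len(xs)   # p_y
rho = sum(p(y) for y in ys) / len(ys)                    # overall success probability

y0 = max(ys, key=p)                                      # advice string promised to exist
print(f"rho = {rho:.3f}, best fixed y0 achieves {p(y0):.3f}")
assert p(y0) >= rho
```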
This argument has wide use in complexity theory (e.g. proving that $\mathsf{BPP} \subseteq \mathsf{P/poly}$) and cryptography (e.g. proving that indistinguishable encryption results in semantic security). A plethora of such applications can be found in Goldreich's books. [2] [3] [4]
In complex analysis, an entire function, also called an integral function, is a complex-valued function that is holomorphic on the whole complex plane. Typical examples of entire functions are polynomials and the exponential function, and any finite sums, products and compositions of these, such as the trigonometric functions sine and cosine and their hyperbolic counterparts sinh and cosh, as well as derivatives and integrals of entire functions such as the error function. If an entire function f(z) has a root at w, then f(z)/(z − w), taking the limit value at w, is an entire function. On the other hand, the natural logarithm, the reciprocal function, and the square root are all not entire functions, nor can they be continued analytically to an entire function.
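As a brief numerical illustration (assumed, not from the text above), sin(z) has a root at w = 0, and sin(z)/z with the limit value 1 at 0 extends to an entire function, the sinc function:

```python
# Sketch: sin(z)/z with the limit value 1 at z = 0 (the sinc function).
import cmath

def sinc(z, eps=1e-30):
    return 1.0 if abs(z) < eps else cmath.sin(z) / z

for z in (1e-1, 1e-4 + 1e-4j, 1e-8j):
    print(z, sinc(z))   # the values approach 1 as z -> 0
```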
In computability theory, a primitive recursive function is, roughly speaking, a function that can be computed by a computer program whose loops are all "for" loops. Primitive recursive functions form a strict subset of those general recursive functions that are also total functions.
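For illustration (a sketch, not a formal model of computation), here are a few primitive recursive functions written with nothing but bounded "for" loops, each loop bound fixed before the loop starts:

```python
# Addition, multiplication and factorial built from bounded "for" loops only,
# mirroring how primitive recursion iterates a step a predetermined number of times.
def add(x, y):
    for _ in range(y):          # iterate the successor function y times
        x += 1
    return x

def mul(x, y):
    acc = 0
    for _ in range(y):          # iterate addition y times
        acc = add(acc, x)
    return acc

def factorial(n):
    acc = 1
    for i in range(1, n + 1):   # loop bound known before entering the loop
        acc = mul(acc, i)
    return acc

print(add(2, 3), mul(4, 5), factorial(5))   # 5 20 120
```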
In physics, the CHSH inequality can be used in the proof of Bell's theorem, which states that certain consequences of entanglement in quantum mechanics cannot be reproduced by local hidden-variable theories. Experimental verification of violation of the inequalities is seen as experimental confirmation that nature cannot be described by such theories. CHSH stands for John Clauser, Michael Horne, Abner Shimony, and Richard Holt, who described it in a much-cited paper published in 1969. They derived the CHSH inequality, which, as with John Bell's original inequality, is a constraint on the statistical occurrence of "coincidences" in a Bell test which is necessarily true if there exist underlying local hidden variables, an assumption that is sometimes termed local realism. On the other hand, the inequality is routinely violated by modern experiments in quantum mechanics.
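As a hedged sketch (the correlation E(a, b) = cos(a - b) for a maximally entangled pair and the standard measurement angles are textbook conventions assumed here, not quoted from the 1969 paper), the quantum prediction for the CHSH combination reaches 2√2, beyond the classical bound of 2:

```python
# Quantum CHSH value for a maximally entangled pair, assuming the textbook
# correlation E(a, b) = cos(a - b) and the standard optimal angles.
import math

E = lambda a, b: math.cos(a - b)
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(S, 2 * math.sqrt(2))   # both about 2.828; any local hidden-variable model obeys |S| <= 2
```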
In mathematics, more specifically in functional analysis, a positive linear functional on an ordered vector space $(V, \leq)$ is a linear functional $f$ on $V$ such that for all positive elements $v \in V$, that is $v \geq 0$, it holds that $f(v) \geq 0$.
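A minimal sketch (the weights below are illustrative assumptions): on R^n with the componentwise order, pairing with a nonnegative weight vector is a positive linear functional:

```python
# Pairing with nonnegative weights w sends componentwise-positive vectors to
# nonnegative reals, so v -> w . v is a positive linear functional on R^n.
import random

w = [0.5, 2.0, 0.0, 1.25]                       # nonnegative weights
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))

for _ in range(1000):
    v = [random.uniform(0, 10) for _ in w]      # a random positive element
    assert f(v) >= 0
print("positive elements are mapped to nonnegative reals")
```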
In probability theory, the central limit theorem states that, under certain circumstances, the probability distribution of the scaled mean of a random sample converges to a normal distribution as the sample size increases to infinity. Under stronger assumptions, the Berry–Esseen theorem, or Berry–Esseen inequality, gives a more quantitative result, because it also specifies the rate at which this convergence takes place by giving a bound on the maximal error of approximation between the normal distribution and the true distribution of the scaled sample mean. The approximation is measured by the Kolmogorov–Smirnov distance. In the case of independent samples, the convergence rate is $n^{-1/2}$, where n is the sample size, and the constant is estimated in terms of the third absolute normalized moments.
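A rough Monte Carlo sketch (the exponential example, sample sizes, and the use of scipy.stats.kstest are assumptions, not part of the theorem): the Kolmogorov–Smirnov distance between the standardized sample mean and the normal distribution shrinks on the order of $n^{-1/2}$:

```python
# Estimate the Kolmogorov distance between the standardized mean of n
# exponential(1) samples (mean 1, standard deviation 1) and N(0, 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for n in (10, 40, 160):
    means = rng.exponential(size=(40_000, n)).mean(axis=1)
    z = (means - 1.0) * np.sqrt(n)                # standardized sample mean
    ks = stats.kstest(z, "norm").statistic        # sup-distance to the normal CDF
    print(n, round(ks, 4), round(1 / np.sqrt(n), 4))   # distance vs. the 1/sqrt(n) scale
```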
In mathematics, subadditivity is a property of a function that states, roughly, that evaluating the function for the sum of two elements of the domain always returns something less than or equal to the sum of the function's values at each element. There are numerous examples of subadditive functions in various areas of mathematics, particularly norms and square roots. Additive maps are special cases of subadditive functions.
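A quick numerical sanity check (an assumed sketch, not a proof) of subadditivity for the square root on the nonnegative reals:

```python
# Check sqrt(x + y) <= sqrt(x) + sqrt(y) on random nonnegative inputs.
import math, random

for _ in range(10_000):
    x, y = random.uniform(0, 1e6), random.uniform(0, 1e6)
    assert math.sqrt(x + y) <= math.sqrt(x) + math.sqrt(y)
print("no counterexample found")
```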
In mathematics, the von Mangoldt function is an arithmetic function named after German mathematician Hans von Mangoldt. It is an example of an important arithmetic function that is neither multiplicative nor additive.
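A minimal sketch of the definition (the helper name is an assumption): Λ(n) = log p if n = p^k for a prime p and integer k ≥ 1, and Λ(n) = 0 otherwise:

```python
# Von Mangoldt function: log p on prime powers p^k, 0 elsewhere.
import math

def mangoldt(n):
    for p in range(2, n + 1):
        if n % p == 0:                   # p is the smallest prime factor of n
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0   # prime power iff nothing remains
    return 0.0                           # n == 1

print([round(mangoldt(n), 3) for n in range(1, 11)])
# Lambda(6) = 0 differs from Lambda(2) * Lambda(3) and from Lambda(2) + Lambda(3),
# so Lambda is neither multiplicative nor additive.
print(mangoldt(6), mangoldt(2) * mangoldt(3), mangoldt(2) + mangoldt(3))
```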
In mathematics, a multiple integral is a definite integral of a function of several real variables, for instance, f(x, y) or f(x, y, z). Integrals of a function of two variables over a region in $\mathbb{R}^2$ are called double integrals, and integrals of a function of three variables over a region in $\mathbb{R}^3$ are called triple integrals. For multiple integrals of a single-variable function, see the Cauchy formula for repeated integration.
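A small numerical sketch (the integrand and region are illustrative assumptions), using scipy.integrate.dblquad to evaluate a double integral over a rectangle:

```python
# Double integral of f(x, y) = x * y over [0, 1] x [0, 2]; the exact value is 1.
from scipy.integrate import dblquad

# dblquad integrates the inner variable (first argument of the lambda) from
# 0 to 2 and the outer variable x from 0 to 1.
value, error = dblquad(lambda y, x: x * y, 0, 1, lambda x: 0, lambda x: 2)
print(value, error)   # ~1.0 plus a numerical error estimate
```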
In information theory, information dimension is an information measure for random vectors in Euclidean space, based on the normalized entropy of finely quantized versions of the random vectors. This concept was first introduced by Alfréd Rényi in 1959.
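A rough Monte Carlo sketch (quantization level and sample sizes are assumptions) of the defining ratio $H(\langle X \rangle_m)/\log m$, where $\langle X \rangle_m = \lfloor mX \rfloor / m$, for a continuous and a discrete example:

```python
# Estimate the information dimension via the entropy of a finely quantized copy.
import numpy as np

def info_dim_estimate(samples, m):
    _, counts = np.unique(np.floor(m * samples), return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(m)     # H(<X>_m) / log m

rng = np.random.default_rng(1)
m = 2 ** 10
print(info_dim_estimate(rng.uniform(size=500_000), m))               # close to 1
print(info_dim_estimate(rng.integers(0, 4, size=500_000) / 4.0, m))  # log(4)/log(m) = 0.2, tending to 0 as m grows
```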
A ratio distribution is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions. Given two random variables X and Y, the distribution of the random variable Z that is formed as the ratio Z = X/Y is a ratio distribution.
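A hedged Monte Carlo sketch of a well-known special case (not stated above): the ratio of two independent standard normal variables follows the standard Cauchy distribution:

```python
# The empirical ratio distribution of two independent standard normals should be
# statistically indistinguishable from a standard Cauchy distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
z = rng.standard_normal(100_000) / rng.standard_normal(100_000)
print(stats.kstest(z, "cauchy"))   # typically a large p-value: consistent with Cauchy
```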
The Schröder–Bernstein theorem from set theory has analogs in the context of operator algebras. This article discusses such operator-algebraic results.
In the mathematical theory of conformal and quasiconformal mappings, the extremal length of a collection of curves $\Gamma$ is a measure of the size of $\Gamma$ that is invariant under conformal mappings. More specifically, suppose that $D$ is an open set in the complex plane, $\Gamma$ is a collection of paths in $D$, and $f \colon D \to D'$ is a conformal mapping. Then the extremal length of $\Gamma$ is equal to the extremal length of the image of $\Gamma$ under $f$. One also works with the conformal modulus of $\Gamma$, the reciprocal of the extremal length. The fact that extremal length and conformal modulus are conformal invariants of $\Gamma$ makes them useful tools in the study of conformal and quasiconformal mappings. One also works with extremal length in dimensions greater than two and certain other metric spaces, but the following deals primarily with the two-dimensional setting.
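For reference, a standard formulation of the definition (not spelled out in the paragraph above; here $\rho$ ranges over Borel-measurable metrics on $D$ with $0 < \int_D \rho^2\,dx\,dy < \infty$) is

$$\operatorname{EL}(\Gamma) = \sup_{\rho} \frac{\left(\inf_{\gamma \in \Gamma} \int_{\gamma} \rho\,|dz|\right)^{2}}{\int_{D} \rho^{2}\,dx\,dy}, \qquad \operatorname{mod}(\Gamma) = \frac{1}{\operatorname{EL}(\Gamma)}.$$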
Learning with errors (LWE) is the computational problem of inferring a linear $n$-ary function $f$ over a finite ring from given samples $y_i = f(\mathbf{x}_i)$, some of which may be erroneous. The LWE problem is conjectured to be hard to solve, and thus to be useful in cryptography.
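A toy sketch (the parameters q, n, m and the noise range are illustrative assumptions, not recommended values) of generating LWE samples with numpy:

```python
# Generate LWE samples (A, b = A s + e mod q); recovering s from (A, b) despite
# the small noise e is the learning-with-errors problem.
import numpy as np

rng = np.random.default_rng(3)
q, n, m = 97, 8, 20                       # toy modulus, secret length, number of samples
s = rng.integers(0, q, size=n)            # secret vector
A = rng.integers(0, q, size=(m, n))       # public random matrix, one sample per row
e = rng.integers(-2, 3, size=m)           # small noise, a stand-in for a discrete Gaussian
b = (A @ s + e) % q                       # noisy inner products

print(A[:3])
print(b[:3])   # without e, Gaussian elimination on (A, b) would reveal s
```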
A product distribution is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables X and Y, the distribution of the random variable Z that is formed as the product Z = XY is a product distribution.
Ahlfors theory is a mathematical theory invented by Lars Ahlfors as a geometric counterpart of the Nevanlinna theory. Ahlfors was awarded one of the two very first Fields Medals for this theory in 1936.
In mathematics, infinite compositions of analytic functions (ICAF) offer alternative formulations of analytic continued fractions, series, products and other infinite expansions, and the theory evolving from such compositions may shed light on the convergence/divergence of these expansions. Some functions can actually be expanded directly as infinite compositions. In addition, it is possible to use ICAF to evaluate solutions of fixed point equations involving infinite expansions. Complex dynamics offers another venue for iteration of systems of functions rather than a single function. For infinite compositions of a single function see Iterated function. For compositions of a finite number of functions, useful in fractal theory, see Iterated function system.
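As a small numerical sketch (the family f_k(z) = sqrt(k + z) is an assumed example, not one taken from this article), the inner compositions of a sequence of distinct functions can converge to a finite value:

```python
# Inner compositions f_1(f_2(...f_n(0))) with f_k(z) = sqrt(k + z) converge to
# the nested radical sqrt(1 + sqrt(2 + sqrt(3 + ...))).
import math

def inner_composition(n):
    z = 0.0
    for k in range(n, 0, -1):     # apply f_n first, then f_{n-1}, ..., finally f_1
        z = math.sqrt(k + z)
    return z

for n in (5, 10, 20, 40):
    print(n, inner_composition(n))   # stabilizes near 1.75793
```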
The sample complexity of a machine learning algorithm represents the number of training samples that it needs in order to successfully learn a target function.
Distributional learning theory, or learning of probability distributions, is a framework in computational learning theory. It was proposed by Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert Schapire and Linda Sellie in 1994, and it was inspired by the PAC framework introduced by Leslie Valiant.
Short integer solution (SIS) and ring-SIS problems are two average-case problems that are used in lattice-based cryptography constructions. Lattice-based cryptography began in 1996 with a seminal work by Miklós Ajtai, who presented a family of one-way functions based on the SIS problem. He showed that it is secure in an average case if the shortest vector problem is hard in a worst-case scenario.
In mathematics and theoretical computer science, analysis of Boolean functions is the study of real-valued functions on $\{0,1\}^n$ or $\{-1,1\}^n$ from a spectral perspective. The functions studied are often, but not always, Boolean-valued, making them Boolean functions. The area has found many applications in combinatorics, social choice theory, random graphs, and theoretical computer science, especially in hardness of approximation, property testing, and PAC learning.
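A small sketch (the majority function and helper names are assumptions): computing the Fourier–Walsh coefficients of a Boolean function on $\{-1,1\}^3$, that is, its correlations with the parity functions $\chi_S(x) = \prod_{i \in S} x_i$:

```python
# Fourier-Walsh expansion of the 3-bit majority function over the uniform
# distribution on {-1, 1}^3.
import itertools, math

n = 3
maj = lambda x: 1 if sum(x) > 0 else -1                  # majority of 3 bits
cube = list(itertools.product([-1, 1], repeat=n))

def fourier_coefficient(f, S):
    # average of f(x) * chi_S(x) over the cube
    return sum(f(x) * math.prod(x[i] for i in S) for x in cube) / len(cube)

subsets = itertools.chain.from_iterable(
    itertools.combinations(range(n), r) for r in range(n + 1))
for S in subsets:
    c = fourier_coefficient(maj, S)
    if c:
        print(S, c)   # Maj3 = 0.5*x0 + 0.5*x1 + 0.5*x2 - 0.5*x0*x1*x2
```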