Proper complexity function

A proper complexity function is a function f mapping a natural number to a natural number such that:

- f is nondecreasing;
- there exists a multitape Turing machine with input and output that, on every input x, computes f(|x|) in unary (that is, writes out f(|x|) copies of a fixed symbol) using O(|x| + f(|x|)) time and O(f(|x|)) space in addition to its input.
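
An informal illustration (a plain-Python sketch, not the multitape Turing machine of the formal definition): for a typical proper complexity function such as f(n) = n^2, the value f(n) can be written out in unary with a number of elementary steps proportional to n + f(n).

```python
# Illustrative sketch only: writing f(n) = n**2 in unary takes work
# proportional to n + f(n), mirroring the time bound in the definition.
# This stands in for the Turing machine of the formal definition.

def f(n: int) -> int:
    return n * n  # a typical proper complexity function

def unary_of_f(n: int) -> str:
    steps = n          # reading the n input symbols
    out = []
    for _ in range(f(n)):
        out.append("1")  # one output symbol written per step
        steps += 1
    assert steps <= 2 * (n + f(n))  # total work stays O(n + f(n))
    return "".join(out)

print(len(unary_of_f(10)))  # prints 100, i.e. f(10)
```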

If f and g are two proper complexity functions, then f + g, f·g, and 2^f are also proper complexity functions.
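
For instance, taking the standard proper functions f(n) = n and g(n) = n^2, the closure properties yield further proper complexity functions:

```latex
% Worked instance of the closure properties for f(n) = n and g(n) = n^2.
\[
(f+g)(n) = n + n^{2}, \qquad (f \cdot g)(n) = n^{3}, \qquad 2^{f(n)} = 2^{n}.
\]
```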

Similar notions include honest functions, space-constructible functions, and time-constructible functions.[1]

Related Research Articles

Algebraic geometry (branch of mathematics)

Algebraic geometry is a branch of mathematics, classically studying zeros of multivariate polynomials. Modern algebraic geometry is based on the use of abstract algebraic techniques, mainly from commutative algebra, for solving geometrical problems about these sets of zeros.

In the computer science subfield of algorithmic information theory, a Chaitin constant or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin.
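
Concretely, for a prefix-free universal machine U, the halting probability is usually written as a sum over the programs on which U halts (its value depends on the choice of U):

```latex
% Halting probability of a prefix-free universal machine U.
\[
\Omega_{U} \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}.
\]
```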

Big O notation (notation to describe the limiting behavior of a function)

Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.
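
In its most common form, for functions on the natural numbers with the argument tending to infinity, the notation can be stated as:

```latex
% Standard definition of big O notation for n tending to infinity.
\[
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_{0} \in \mathbb{N}
\ \text{such that}\ |f(n)| \le c\,|g(n)| \ \text{for all } n \ge n_{0}.
\]
```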

In computational complexity theory, the complexity class EXPTIME is the set of all decision problems that have exponential runtime, i.e., that are solvable by a deterministic Turing machine in O(2^p(n)) time, where p(n) is a polynomial function of n.
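
Equivalently, the class can be written as a union of deterministic time classes with polynomial exponents:

```latex
% EXPTIME as a union over polynomial exponents.
\[
\mathsf{EXPTIME} \;=\; \bigcup_{k \in \mathbb{N}} \mathsf{DTIME}\!\left(2^{n^{k}}\right).
\]
```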

In computational complexity theory, the time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with n^2 time but not n time.
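
A commonly cited form of the deterministic time hierarchy theorem, stated for time-constructible bounds, is:

```latex
% Deterministic time hierarchy theorem for a time-constructible bound f.
\[
\text{If } f(n) \text{ is time-constructible, then}\quad
\mathsf{DTIME}\bigl(o(f(n)/\log f(n))\bigr) \subsetneq \mathsf{DTIME}\bigl(f(n)\bigr).
\]
```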

In mathematics, a sheaf is a tool for systematically tracking locally defined data attached to the open sets of a topological space. The data can be restricted to smaller open sets, and the data assigned to an open set is equivalent to all collections of compatible data assigned to collections of smaller open sets covering the original one. For example, such data can consist of the rings of continuous or smooth real-valued functions defined on each open set. Sheaves are by design quite general and abstract objects, and their correct definition is rather technical. They are variously defined, for example, as sheaves of sets or sheaves of rings, depending on the type of data assigned to open sets.

In computer science, a one-way function is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial time problems. Not being one-to-one is not considered sufficient for a function to be called one-way.
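
One common way to make "hard to invert" precise (the exact phrasing varies between texts) is to require that f is computable in polynomial time, while every probabilistic polynomial-time algorithm A inverts it only with negligible probability over a uniformly random n-bit input x:

```latex
% A common formalization of the inversion requirement for one-way functions.
\[
\Pr_{x \leftarrow \{0,1\}^{n}}\bigl[\, f\bigl(A(1^{n}, f(x))\bigr) = f(x) \,\bigr]
\;\le\; \varepsilon(n), \qquad \text{with } \varepsilon \text{ negligible}.
\]
```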

Time complexity (an estimate of the time taken to run an algorithm)

In computer science, the time complexity is the computational complexity that describes the amount of time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to differ by at most a constant factor.
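
As a small illustration, the following Python sketch counts the comparisons made by a naive pairwise duplicate check over n items; the count grows as n(n-1)/2, so the algorithm is classified as quadratic regardless of how long each individual comparison takes on a particular machine.

```python
# Count the elementary comparisons of a naive pairwise duplicate check.
# The count is n*(n-1)/2, i.e. Theta(n^2), independent of machine speed.

def count_comparisons(items):
    comparisons = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            comparisons += 1           # one elementary operation
            _ = items[i] == items[j]   # the comparison itself
    return comparisons

for n in (10, 100, 1000):
    print(n, count_comparisons(list(range(n))))  # 45, 4950, 499500
```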

In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. A typical complexity class has a definition of the form: the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R, where n is the size of the input.

In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured by the number of bits in the input. The first systematic work on parameterized complexity was done by Downey & Fellows (1999).
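
The central tractable class in this setting is FPT (fixed-parameter tractable): problems solvable in time whose superpolynomial part depends only on the parameter k, not on the input size n.

```latex
% Fixed-parameter tractability: f is any computable function of the
% parameter k alone, and c is a constant independent of k.
\[
\mathsf{FPT}: \quad \text{running time } f(k) \cdot n^{c}.
\]
```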

In computational complexity theory, DSPACE or SPACE is the computational resource of memory space for a deterministic Turing machine. It represents the total amount of memory space that a "normal" physical computer would need to solve a given computational problem with a given algorithm.
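
Well-known space classes are obtained by bounding this resource, for example:

```latex
% Standard space-bounded classes defined from DSPACE.
\[
\mathsf{L} = \mathsf{DSPACE}(\log n), \qquad
\mathsf{PSPACE} = \bigcup_{k \in \mathbb{N}} \mathsf{DSPACE}(n^{k}).
\]
```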

In computational complexity theory, DTIME is the computational resource of computation time for a deterministic Turing machine. It represents the amount of time that a "normal" physical computer would take to solve a certain computational problem using a certain algorithm. It is one of the most well-studied complexity resources, because it corresponds so closely to an important real-world resource.
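
For example, polynomial time is the union of the deterministic time classes with polynomial bounds:

```latex
% P expressed in terms of DTIME.
\[
\mathsf{P} \;=\; \bigcup_{k \in \mathbb{N}} \mathsf{DTIME}(n^{k}).
\]
```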

In computational complexity theory, the complexity class NTIME(f(n)) is the set of decision problems that can be solved by a non-deterministic Turing machine which runs in time O(f(n)). Here O is the big O notation, f is some function, and n is the size of the input.
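
Analogously to the deterministic case, NP is the union of the nondeterministic polynomial-time classes:

```latex
% NP expressed in terms of NTIME.
\[
\mathsf{NP} \;=\; \bigcup_{k \in \mathbb{N}} \mathsf{NTIME}(n^{k}).
\]
```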

In computational complexity theory, the space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space n log n than in space n. The somewhat weaker analogous theorems for time are the time hierarchy theorems.
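
In its deterministic form the theorem is usually stated for space-constructible bounds:

```latex
% Deterministic space hierarchy theorem for a space-constructible bound f.
\[
\text{If } f(n) \text{ is space-constructible, then}\quad
\mathsf{DSPACE}\bigl(o(f(n))\bigr) \subsetneq \mathsf{DSPACE}\bigl(f(n)\bigr).
\]
```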

In computational complexity theory, the complexity class ELEMENTARY of elementary recursive functions is the union of the exponential-time classes of every fixed tower height, as written out below.
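
In the standard presentation, the union is taken over the k-fold exponential time classes:

```latex
% ELEMENTARY as the union of the k-fold exponential time classes.
\[
\mathsf{ELEMENTARY} \;=\; \bigcup_{k \in \mathbb{N}} k\text{-}\mathsf{EXP}
\;=\; \mathsf{DTIME}\bigl(2^{n}\bigr) \cup \mathsf{DTIME}\bigl(2^{2^{n}}\bigr)
\cup \mathsf{DTIME}\Bigl(2^{2^{2^{n}}}\Bigr) \cup \cdots
\]
```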

In complexity theory, a time-constructible function is a function f from natural numbers to natural numbers with the property that f(n) can be constructed from n by a Turing machine in the time of order f(n). The purpose of such a definition is to exclude functions that do not provide an upper bound on the runtime of some Turing machine.
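
One common formulation (conventions differ slightly between texts) requires a machine that, given the string 1^n, produces f(n) within time of order f(n); for example, f(n) = n^2 is time-constructible in this sense.

```latex
% One common formulation of time-constructibility (conventions vary).
\[
\exists\, M:\quad M(1^{n}) = f(n) \ \text{ and }\ \mathrm{time}_{M}(1^{n}) = O(f(n)).
\]
```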

Computable functions are the basic objects of study in computability theory. Computable functions are the formalized analogue of the intuitive notion of algorithm, in the sense that a function is computable if there exists an algorithm that can do the job of the function, i.e. given an input of the function domain it can return the corresponding output. Computable functions are used to discuss computability without referring to any concrete model of computation such as Turing machines or register machines. Any definition, however, must make reference to some specific model of computation but all valid definitions yield the same class of functions. Particular models of computability that give rise to the set of computable functions are the Turing-computable functions and the μ-recursive functions.

In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed is typically a short binary string drawn from the uniform distribution.
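
One common way to state the requirement, with U_n the uniform distribution on n-bit strings and ε the allowed distinguishing advantage, is:

```latex
% A generator G: {0,1}^l -> {0,1}^n (n > l) fools a class of statistical
% tests if every test T in the class has advantage at most epsilon.
\[
\bigl|\, \Pr[\,T(G(U_{\ell})) = 1\,] - \Pr[\,T(U_{n}) = 1\,] \,\bigr| \;\le\; \varepsilon
\qquad \text{for every test } T \text{ in the class}.
\]
```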

In computational complexity theory the Blum axioms or Blum complexity axioms are axioms that specify desirable properties of complexity measures on the set of computable functions. The axioms were first defined by Manuel Blum in 1967.
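
For a numbering φ of the partial computable functions and an associated complexity measure Φ, the two axioms can be stated as:

```latex
% Blum axioms for a complexity measure Phi associated with a numbering phi.
\[
\begin{aligned}
&\text{(1)}\quad \Phi_{i}(x) \text{ is defined} \iff \varphi_{i}(x) \text{ is defined};\\
&\text{(2)}\quad \text{the predicate } \Phi_{i}(x) = t \text{ is decidable (uniformly in } i, x, t\text{)}.
\end{aligned}
\]
```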

In mathematics, effective dimension is a modification of Hausdorff dimension and other fractal dimensions that places it in a computability theory setting. There are several variations, of which the most common is effective Hausdorff dimension. Dimension, in mathematics, is a particular way of describing the size of an object. Hausdorff dimension generalizes the well-known integer dimensions assigned to points, lines, planes, etc. by allowing one to distinguish between objects of intermediate size between these integer-dimensional objects. For example, fractal subsets of the plane may have intermediate dimension between 1 and 2, as they are "larger" than lines or curves, and yet "smaller" than filled circles or rectangles. Effective dimension modifies Hausdorff dimension by requiring that objects with small effective dimension be not only small but also locatable in a computable sense. As such, objects with large Hausdorff dimension also have large effective dimension, and objects with small effective dimension have small Hausdorff dimension, but an object can have small Hausdorff but large effective dimension. An example is an algorithmically random point on a line, which has Hausdorff dimension 0 but effective dimension 1.

References

  1. Alexei Myasnikov, Vladimir Shpilrain, Alexander Ushakov. Group-based Cryptography. Birkhäuser Verlag, 2008, p. 28.