In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it.[1] Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that allow solving the problem.
The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory. Both areas are highly related, as the complexity of an algorithm is always an upper bound on the complexity of the problem solved by this algorithm. Moreover, for designing efficient algorithms, it is often fundamental to compare the complexity of a specific algorithm to the complexity of the problem to be solved. Also, in most cases, the only thing that is known about the complexity of a problem is that it is lower than the complexity of the most efficient known algorithms. Therefore, there is a large overlap between analysis of algorithms and complexity theory.
As the amount of resources required to run an algorithm generally varies with the size of the input, the complexity is typically expressed as a function n → f(n), where n is the size of the input and f(n) is either the worst-case complexity (the maximum of the amount of resources that are needed over all inputs of size n) or the average-case complexity (the average of the amount of resources over all inputs of size n). Time complexity is generally expressed as the number of required elementary operations on an input of size n, where elementary operations are assumed to take a constant amount of time on a given computer and change only by a constant factor when run on a different computer. Space complexity is generally expressed as the amount of memory required by an algorithm on an input of size n.
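For illustration, the following sketch (written for this article's purposes, not taken from the cited sources) counts the comparisons performed by a linear search; over inputs of size n, the worst case is n comparisons and the average case, assuming the target is equally likely to be at each position, is (n + 1)/2.

def linear_search_comparisons(items, target):
    """Return (index, number of comparisons) of a linear search."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

# Worst case for n = 1000: the target is absent, so all n elements are compared.
n = 1000
data = list(range(n))
_, worst = linear_search_comparisons(data, -1)

# Average case over all n possible target positions: (1 + 2 + ... + n) / n = (n + 1) / 2.
average = sum(linear_search_comparisons(data, t)[1] for t in data) / n

print(worst)    # 1000
print(average)  # 500.5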
The resource that is most commonly considered is time. When "complexity" is used without qualification, this generally means time complexity.
The usual units of time (seconds, minutes etc.) are not used in complexity theory because they are too dependent on the choice of a specific computer and on the evolution of technology. For instance, a computer today can execute an algorithm significantly faster than a computer from the 1960s; however, this is not an intrinsic feature of the algorithm but rather a consequence of technological advances in computer hardware. Complexity theory seeks to quantify the intrinsic time requirements of algorithms, that is, the basic time constraints an algorithm would place on any computer. This is achieved by counting the number of elementary operations that are executed during the computation. These operations are assumed to take constant time (that is, not affected by the size of the input) on a given machine, and are often called steps.
Formally, the bit complexity refers to the number of operations on bits that are needed for running an algorithm. With most models of computation, it equals the time complexity up to a constant factor. On computers, the number of operations on machine words that are needed is also proportional to the bit complexity. So, the time complexity and the bit complexity are equivalent for realistic models of computation.
Another important resource is the size of computer memory that is needed for running algorithms.
For the class of distributed algorithms that are commonly executed by multiple, interacting parties, the resource that is of most interest is the communication complexity. It is the necessary amount of communication between the executing parties.
The number of arithmetic operations is another resource that is commonly used. In this case, one talks of arithmetic complexity. If one knows an upper bound on the size of the binary representation of the numbers that occur during a computation, the time complexity is generally the product of the arithmetic complexity and a constant factor.
For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take a constant time. Therefore, the time complexity, generally called bit complexity in this context, may be much larger than the arithmetic complexity. For example, the arithmetic complexity of the computation of the determinant of an n×n integer matrix is O(n^3) for the usual algorithms (Gaussian elimination). The bit complexity of the same algorithms is exponential in n, because the size of the coefficients may grow exponentially during the computation. On the other hand, if these algorithms are coupled with multi-modular arithmetic, the bit complexity may be reduced to Õ(n^4).
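The gap between arithmetic complexity and bit complexity can be made concrete with a small experiment (a sketch of our own, illustrating one naive division-free elimination scheme rather than any routine from the cited sources): it performs O(n^3) integer multiplications, yet the bit length of the intermediate entries roughly doubles at every elimination step, which is the exponential coefficient growth mentioned above. The Bareiss and multi-modular variants avoid this growth.

import random

def naive_division_free_elimination(a):
    """Eliminate below the diagonal using only integer cross-multiplication,
    a[i][j] <- a[i][j]*a[k][k] - a[i][k]*a[k][j], without any division.
    Return the largest bit length present after each elimination step."""
    n = len(a)
    growth = []
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = a[i][j] * a[k][k] - a[i][k] * a[k][j]
            a[i][k] = 0
        growth.append(max(abs(x).bit_length() for row in a for x in row))
    return growth

random.seed(0)
n = 8
matrix = [[random.randint(1, 9) for _ in range(n)] for _ in range(n)]
print(naive_division_free_elimination(matrix))
# The printed bit lengths roughly double at each step (exponential bit growth),
# although only about 2*n^3/3 = O(n^3) integer multiplications are performed.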
In sorting and searching, the resource that is generally considered is the number of entry comparisons. This is generally a good measure of the time complexity if data are suitably organized.
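As a concrete measurement (a sketch of our own, not a standard benchmark), the following fragment counts the comparisons performed by Python's built-in sort on random data; the count comes out close to n log2 n.

import random

class CountingKey:
    """Wrapper that counts how many comparisons sorted() performs."""
    comparisons = 0
    def __init__(self, value):
        self.value = value
    def __lt__(self, other):
        CountingKey.comparisons += 1
        return self.value < other.value

random.seed(1)
n = 100_000
data = [CountingKey(random.random()) for _ in range(n)]
sorted(data)
print(CountingKey.comparisons)   # on the order of n * log2(n), roughly 1.5 to 1.7 million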
It is impossible to count the number of steps of an algorithm on all possible inputs. As the complexity generally increases with the size of the input, the complexity is typically expressed as a function of the size n (in bits) of the input, and therefore, the complexity is a function of n. However, the complexity of an algorithm may vary dramatically for different inputs of the same size. Therefore, several complexity functions are commonly used.
The worst-case complexity is the maximum of the complexity over all inputs of size n, and the average-case complexity is the average of the complexity over all inputs of size n (this makes sense, as the number of possible inputs of a given size is finite). Generally, when "complexity" is used without being further specified, this is the worst-case time complexity that is considered.
It is generally difficult to compute precisely the worst-case and the average-case complexity. In addition, these exact values are of little practical use, as any change of computer or of model of computation would change the complexity somewhat. Moreover, resource use is not critical for small values of n, so for small n the ease of implementation is generally more important than a low complexity.
For these reasons, one generally focuses on the behavior of the complexity for large n, that is, on its asymptotic behavior as n tends to infinity. Therefore, the complexity is generally expressed by using big O notation.
For example, the usual algorithm for integer multiplication has a complexity of O(n^2); this means that there is a constant c_u such that the multiplication of two integers of at most n digits may be done in a time less than c_u n^2. This bound is sharp in the sense that the worst-case complexity and the average-case complexity are Ω(n^2), which means that there is a constant c_l such that these complexities are larger than c_l n^2. The radix does not appear in these complexities, as changing the radix changes only the constants c_u and c_l.
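For concreteness, here is a sketch of the schoolbook method on digit lists (our own illustration, not an optimized routine); multiplying two n-digit numbers performs exactly n^2 elementary digit multiplications, matching the bound c_u n^2 with c_u = 1 for this particular operation count.

def schoolbook_multiply(a_digits, b_digits, base=10):
    """Multiply two numbers given as little-endian digit lists; also return
    the number of elementary digit-by-digit multiplications performed."""
    result = [0] * (len(a_digits) + len(b_digits))
    digit_multiplications = 0
    for i, a in enumerate(a_digits):          # every digit of a ...
        for j, b in enumerate(b_digits):      # ... times every digit of b
            digit_multiplications += 1
            result[i + j] += a * b
    carry = 0                                 # single carry-propagation pass
    for k in range(len(result)):
        total = result[k] + carry
        result[k] = total % base
        carry = total // base
    return result, digit_multiplications

def digits_to_int(digits, base=10):
    return sum(d * base**k for k, d in enumerate(digits))

a = [7] * 50                 # a 50-digit number, little-endian
b = [3, 9] * 25              # another 50-digit number
product, count = schoolbook_multiply(a, b)
assert digits_to_int(product) == digits_to_int(a) * digits_to_int(b)
print(count)                 # 2500 = 50 * 50 digit multiplications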
The evaluation of the complexity relies on the choice of a model of computation, which consists of defining the basic operations that are done in a unit of time. When the model of computation is not explicitly specified, it is generally implicitly assumed to be a multitape Turing machine, since several more realistic models of computation, such as random-access machines, are asymptotically equivalent for most problems. It is only for very specific and difficult problems, such as integer multiplication in time O(n log n), that the explicit definition of the model of computation is required for proofs.
A deterministic model of computation is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state. Historically, the first deterministic models were recursive functions, lambda calculus, and Turing machines. The model of random-access machines (also called RAM-machines) is also widely used, as a closer counterpart to real computers.
When the model of computation is not specified, it is generally assumed to be a multitape Turing machine. For most algorithms, the time complexity is the same on multitape Turing machines as on RAM-machines, although some care may be needed in how data is stored in memory to get this equivalence.
In a non-deterministic model of computation, such as non-deterministic Turing machines, some choices may be made at some steps of the computation. In complexity theory, one considers all possible choices simultaneously, and the non-deterministic time complexity is the time needed when the best choices are always made. In other words, one considers that the computation is done simultaneously on as many (identical) processors as needed, and the non-deterministic computation time is the time spent by the first processor that finishes the computation. This parallelism is partly amenable to quantum computing via superposed entangled states when running specific quantum algorithms, such as Shor's algorithm for factoring integers (which, as of March 2018, had been demonstrated only for very small integers such as 21 = 3 × 7).
Even though such a computation model is not realistic yet, it has theoretical importance, mostly related to the P = NP problem, which asks whether the complexity classes P (polynomial time) and NP (non-deterministic polynomial time) are equal. Simulating an NP-algorithm on a deterministic computer usually takes "exponential time". A problem is in the complexity class NP if it may be solved in polynomial time on a non-deterministic machine. A problem is NP-complete if, roughly speaking, it is in NP and is not easier than any other NP problem. Many combinatorial problems, such as the knapsack problem, the travelling salesman problem, and the Boolean satisfiability problem, are NP-complete. For all these problems, the best known algorithms have exponential complexity. If any one of these problems could be solved in polynomial time on a deterministic machine, then all NP problems could also be solved in polynomial time, and one would have P = NP. As of 2017 it is generally conjectured that P ≠ NP, with the practical implication that the worst cases of NP problems are intrinsically difficult to solve, i.e., take longer than any reasonable time span (decades!) for interesting lengths of input.
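The contrast between solving and verifying can be illustrated with the subset-sum version of the knapsack problem (a sketch for illustration only, not the construction used in NP-completeness proofs): verifying a proposed "yes" certificate takes polynomial time, while the obvious deterministic algorithm searches all 2^n subsets.

from itertools import combinations

def verify_certificate(values, subset_indices, target):
    """Polynomial-time check of a proposed 'yes' certificate."""
    return sum(values[i] for i in subset_indices) == target

def brute_force_subset_sum(values, target):
    """Deterministic search over all 2^n subsets: exponential time."""
    n = len(values)
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if verify_certificate(values, subset, target):
                return subset
    return None

values = [3, 34, 4, 12, 5, 2]
print(brute_force_subset_sum(values, 9))      # (2, 4), since 4 + 5 = 9
print(verify_certificate(values, (2, 4), 9))  # True, checked in linear time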
Parallel and distributed computing consist of splitting computation over several processors, which work simultaneously. The difference between the models lies mainly in the way information is transmitted between processors. Typically, in parallel computing the data transmission between processors is very fast, while in distributed computing the data transmission is done through a network and is therefore much slower.
The time needed for a computation on N processors is at least the quotient by N of the time needed by a single processor. In fact, this theoretically optimal bound can never be reached, because some subtasks cannot be parallelized, and some processors may have to wait for a result from another processor.
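One hedged way to quantify this, using Amdahl's law (which the text above does not name explicitly): if T_1 is the time needed on a single processor and a fraction s of the work is inherently sequential, then the time T_N on N processors satisfies

\[
T_N \;\ge\; s\,T_1 + \frac{(1-s)\,T_1}{N} \;\ge\; \frac{T_1}{N},
\]

so the ideal bound T_1/N is approached only when s is close to 0.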
The main complexity problem is thus to design algorithms such that the product of the computation time and the number of processors is as close as possible to the time needed for the same computation on a single processor.
A quantum computer is a computer whose model of computation is based on quantum mechanics. The Church–Turing thesis applies to quantum computers; that is, every problem that can be solved by a quantum computer can also be solved by a Turing machine. However, some problems may theoretically be solved with a much lower time complexity using a quantum computer rather than a classical computer. This is, for the moment, purely theoretical, as no one knows how to build an efficient quantum computer.
Quantum complexity theory has been developed to study the complexity classes of problems solved using quantum computers. It is used in post-quantum cryptography, which consists of designing cryptographic protocols that are resistant to attacks by quantum computers.
The complexity of a problem is the infimum of the complexities of the algorithms that may solve the problem, including unknown algorithms. Thus the complexity of a problem is not greater than the complexity of any algorithm that solves the problem.
It follows that every complexity bound of an algorithm that is expressed with big O notation is also an upper bound on the complexity of the corresponding problem.
On the other hand, it is generally hard to obtain nontrivial lower bounds for problem complexity, and there are few methods for obtaining such lower bounds.
For solving most problems, it is required to read all input data, which normally needs a time proportional to the size of the data. Thus, such problems have a complexity that is at least linear, that is, using big omega notation, a complexity Ω(n).
The solution of some problems, typically in computer algebra and computational algebraic geometry, may be very large. In such a case, the complexity is lower bounded by the maximal size of the output, since the output must be written. For example, a system of n polynomial equations of degree d in n indeterminates may have up to d^n complex solutions, if the number of solutions is finite (this is Bézout's theorem). As these solutions must be written down, the complexity of this problem is Ω(d^n). For this problem, an algorithm of complexity d^{O(n)} is known, which may thus be considered as asymptotically quasi-optimal.
A nonlinear lower bound of Ω(n log n) is known for the number of comparisons needed for a sorting algorithm. Thus the best sorting algorithms are optimal, as their complexity is O(n log n). This lower bound results from the fact that there are n! ways of ordering n objects. As each comparison splits this set of n! orders into two parts, the number N of comparisons that are needed for distinguishing all orders must verify 2^N > n!, which implies N = Ω(n log n), by Stirling's formula.
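Spelled out, the counting argument is the short derivation

\[
2^N \ge n! \quad\Longrightarrow\quad N \ge \log_2(n!) = n\log_2 n - \frac{n}{\ln 2} + O(\log n) = \Omega(n \log n),
\]

where the asymptotic estimate of \log_2(n!) is given by Stirling's formula.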
A standard method for getting lower bounds of complexity consists of reducing a problem to another problem. More precisely, suppose that one may encode a problem A of size n into a subproblem of size f(n) of a problem B, and that the complexity of A is Ω(g(n)). Without loss of generality, one may suppose that the function f increases with n and has an inverse function h. Then the complexity of the problem B is Ω(g(h(n))). This is the method that is used to prove that, if P ≠ NP (an unsolved conjecture), the complexity of every NP-complete problem is Ω(n^k), for every positive integer k.
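In symbols, writing T_B for the complexity of B and ignoring the cost of the encoding itself, the argument is

\[
T_B(f(n)) \;=\; \Omega\bigl(g(n)\bigr)
\quad\Longrightarrow\quad
T_B(m) \;=\; \Omega\bigl(g(h(m))\bigr), \qquad\text{where } m = f(n) \text{ and } n = h(m).
\]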
Evaluating the complexity of an algorithm is an important part of algorithm design, as this gives useful information on the performance that may be expected.
It is a common misconception that the evaluation of the complexity of algorithms will become less important as a result of Moore's law, which posits the exponential growth of the power of modern computers. This is wrong because the increased power allows working with large input data (big data). For example, when one wants to sort alphabetically a list of a few hundred entries, such as the bibliography of a book, any algorithm should work well in less than a second. On the other hand, for a list of a million entries (the phone numbers of a large town, for example), the elementary algorithms that require O(n^2) comparisons would have to do a trillion comparisons, which would need around 28 hours (100,000 seconds) at the speed of 10 million comparisons per second. On the other hand, quicksort and merge sort require only n log2 n comparisons (as average-case complexity for the former, as worst-case complexity for the latter). For n = 1,000,000, this gives approximately 20,000,000 to 30,000,000 comparisons (depending on the constant factor), which would take only a few seconds at 10 million comparisons per second.
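The back-of-the-envelope figures above can be reproduced in a few lines (the speed of 10 million comparisons per second is the assumption used in the text):

import math

n = 1_000_000
speed = 10_000_000                  # assumed comparisons per second

quadratic = n * n                   # O(n^2) elementary sorts: 10^12 comparisons
nlogn = n * math.log2(n)            # about 2.0e7 comparisons
quicksort_avg = 1.39 * nlogn        # quicksort's average constant: about 2.8e7

print(quadratic / speed / 3600)     # about 27.8 hours for the quadratic algorithms
print(nlogn / speed)                # about 2 seconds
print(quicksort_avg / speed)        # about 2.8 seconds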
Thus the evaluation of the complexity may allow eliminating many inefficient algorithms before any implementation. This may also be used for tuning complex algorithms without testing all variants. By determining the most costly steps of a complex algorithm, the study of complexity also allows focusing the effort for improving the efficiency of an implementation on these steps.
The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved.
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm.
In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.
In computational complexity theory, randomized polynomial time (RP) is the complexity class of problems for which a probabilistic Turing machine exists with these properties: it always runs in polynomial time in the input size; if the correct answer is NO, it always returns NO; and if the correct answer is YES, it returns YES with probability at least 1/2 (otherwise, it returns NO).
In computational complexity theory, the time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with n^2 time but not n time, where n is the input length.
The space complexity of an algorithm or a data structure is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely. This includes the memory space used by its inputs, called input space, and any other (auxiliary) memory it uses during execution, which is called auxiliary space.
In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.
In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. This appears to have been first demonstrated in Gurevich, Stockmeyer & Vishkin (1984). The first systematic work on parameterized complexity was done by Downey & Fellows (1999).
In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
In computational complexity theory, Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function f(n) ∈ Ω(log n), NSPACE(f(n)) ⊆ DSPACE(f(n)^2).
In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.
In computer science, an algorithm is said to be asymptotically optimal if, roughly speaking, for large inputs it performs at worst a constant factor worse than the best possible algorithm. It is a term commonly encountered in computer science research as a result of widespread use of big-O notation.
In computational complexity theory and circuit complexity, a Boolean circuit is a mathematical model for combinational digital logic circuits. A formal language can be decided by a family of Boolean circuits, one circuit for each possible input length.
In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits.
In computational complexity theory, the element distinctness problem or element uniqueness problem is the problem of determining whether all the elements of a list are distinct.
In computational complexity the decision tree model is the model of computation in which an algorithm is considered to be basically a decision tree, i.e., a sequence of queries or tests that are done adaptively, so the outcome of previous tests can influence the tests performed next.
Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical complexity classes.
Random-access Turing machines (RATMs) are a computational model in theoretical computer science that is particularly relevant to the study of tractability in big-data computing. Unlike conventional Turing machines, which access memory sequentially, RATMs can access any memory position directly, in a constant amount of time. This aligns the model more closely with the memory access patterns of modern computing systems and improves computational efficiency for problems where data size and access speed are critical, making RATMs a more realistic framework for analyzing algorithms that handle large-scale data.
In computer science, a polynomial-time algorithm is – generally speaking – an algorithm whose running time is upper-bounded by some polynomial function of the input size. The definition naturally depends on the computational model, which determines how the running time is measured, and how the input size is measured. Two prominent computational models are the Turing-machine model and the arithmetic model. A strongly-polynomial time algorithm is polynomial in both models, whereas a weakly-polynomial time algorithm is polynomial only in the Turing machine model.