In computational complexity theory, asymptotic computational complexity is the use of asymptotic analysis to estimate the computational complexity of algorithms and computational problems, commonly associated with the use of big O notation.
With respect to computational resources, asymptotic time complexity and asymptotic space complexity are commonly estimated. Other asymptotically estimated quantities include circuit complexity and various measures of parallel computation, such as the number of (parallel) processors.
Since the ground-breaking 1965 paper by Juris Hartmanis and Richard E. Stearns[1] and the 1979 book by Michael Garey and David S. Johnson on NP-completeness,[2] the term "computational complexity" (of algorithms) has commonly come to mean asymptotic computational complexity.
Further, unless specified otherwise, the term "computational complexity" usually refers to the upper bound for the asymptotic computational complexity of an algorithm or a problem, which is usually written in terms of big O notation, e.g., O(n²). Other types of (asymptotic) computational complexity estimates are lower bounds ("big Omega" notation; e.g., Ω(n)) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using the "big Theta" notation; e.g., Θ(n log n)).
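For reference, a minimal sketch of the standard (Knuth-style) formal definitions behind these three notations, stated for nonnegative functions f and g of the input size n:

```latex
% Standard definitions of the asymptotic bounds used above,
% for nonnegative functions f and g of a natural number n.
\begin{align*}
  f(n) = O(g(n))      &\iff \exists\, c > 0,\ n_0 \;:\; f(n) \le c\, g(n) \text{ for all } n \ge n_0, \\
  f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 \;:\; f(n) \ge c\, g(n) \text{ for all } n \ge n_0, \\
  f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n)).
\end{align*}
```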
A further tacit assumption is that worst-case analysis of computational complexity is meant unless stated otherwise. An alternative approach is the probabilistic analysis of algorithms.
In most practical cases deterministic algorithms or randomized algorithms are discussed, although theoretical computer science also considers nondeterministic algorithms and other advanced models of computation.
In computer science, the analysis of algorithms is the determination of the computational complexity of algorithms, that is the amount of time, storage and/or other resources necessary to execute them. Usually, this involves determining a function that relates the length of an algorithm's input to the number of steps it takes or the number of storage locations it uses. An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same length may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.
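As an illustrative sketch (not drawn from the original text), the following Python snippet contrasts two simple routines for the same task whose worst-case step counts grow quadratically and linearly with the input length; such step-counting functions are exactly what the analysis aims to determine:

```python
# A sketch of worst-case step counting for two routines solving the same task.
# contains_duplicate_quadratic compares every pair: ~n*(n-1)/2 comparisons, O(n^2).
# contains_duplicate_linear uses a set: ~n membership checks, O(n) expected time.

def contains_duplicate_quadratic(items):
    """Return True if any value occurs twice; checks all pairs."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False

def contains_duplicate_linear(items):
    """Return True if any value occurs twice; remembers values seen so far."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

if __name__ == "__main__":
    data = list(range(2_000))                  # worst case: no duplicates, no early exit
    print(contains_duplicate_quadratic(data))  # about 2 million comparisons
    print(contains_duplicate_linear(data))     # about 2 thousand set lookups
```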
In computer science, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most frequently used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the efficiency of other algorithms which require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output. More formally, the output of any sorting algorithm must satisfy two conditions: the output is in nondecreasing order (each element is no smaller than the previous element according to the desired order), and the output is a permutation (a reordering, yet retaining all of the original elements) of the input.
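As a hedged illustration (not part of the original text), here is a short Python sketch of insertion sort together with a check of the two output conditions just stated:

```python
# A minimal insertion sort, plus a check of the two conditions a sorting
# algorithm's output must satisfy: nondecreasing order and being a
# permutation of the input.
from collections import Counter

def insertion_sort(items):
    """Return a new list with the elements of items in nondecreasing order."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements one position to the right.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def is_valid_sort(original, output):
    """Check the two formal conditions on a sorting algorithm's output."""
    nondecreasing = all(output[i] <= output[i + 1] for i in range(len(output) - 1))
    is_permutation = Counter(original) == Counter(output)
    return nondecreasing and is_permutation

if __name__ == "__main__":
    data = [5, 2, 9, 2, 7]
    print(is_valid_sort(data, insertion_sort(data)))  # True
```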
In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.
Combinatorial game theory has several ways of measuring game complexity. This article describes five of them: state-space complexity, game tree size, decision complexity, game-tree complexity, and computational complexity.
In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. A typical complexity class has a definition of the form: the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R, where n is the size of the input.
In the mathematical discipline of graph theory, a vertex cover of a graph is a set of vertices such that each edge of the graph is incident to at least one vertex of the set. The problem of finding a minimum vertex cover is a classical optimization problem in computer science and is a typical example of an NP-hard optimization problem that has an approximation algorithm. Its decision version, the vertex cover problem, was one of Karp's 21 NP-complete problems and is therefore a classical NP-complete problem in computational complexity theory. Furthermore, the vertex cover problem is fixed-parameter tractable and a central problem in parameterized complexity theory.
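As a sketch of one such approximation algorithm (the classical maximal-matching-based 2-approximation, not quoted from the original text), here is a Python version using a hypothetical edge-list representation of the graph:

```python
# Classical 2-approximation for minimum vertex cover:
# repeatedly pick an uncovered edge and add both of its endpoints.
# The resulting cover is at most twice the size of an optimal one.

def approx_vertex_cover(edges):
    """edges: iterable of (u, v) pairs. Returns a set of vertices covering every edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Edge (u, v) is not yet covered: take both endpoints.
            cover.add(u)
            cover.add(v)
    return cover

if __name__ == "__main__":
    # A small example graph: a path 1-2-3-4 plus the edge 2-4.
    graph_edges = [(1, 2), (2, 3), (3, 4), (2, 4)]
    print(approx_vertex_cover(graph_edges))  # {1, 2, 3, 4}; an optimal cover, {2, 3}, has size 2
```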
In computer science, the iterated logarithm of n, written log* n, is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recurrence relation: log* n = 0 if n ≤ 1, and log* n = 1 + log*(log n) if n > 1.
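A short Python sketch of that recurrence (added here for illustration, using the base-2 logarithm as is common in algorithm analysis):

```python
import math

def iterated_log2(n):
    """Number of times log2 must be applied to n before the result is <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)   # math.log2 accepts arbitrarily large integers
        count += 1
    return count

if __name__ == "__main__":
    # log* grows extremely slowly: even for astronomically large inputs it stays tiny.
    for n in (2, 16, 65536, 2 ** 65536):
        print(iterated_log2(n))   # 1, 3, 4, 5
```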
L-notation is an asymptotic notation analogous to big O notation, denoted as L_n[α, c] for a bound variable n tending to infinity. Like big O notation, it is usually used to roughly convey the computational complexity of a particular algorithm.
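As a point of reference (a sketch, not quoted from the original text), the notation is usually defined, for a parameter α between 0 and 1 and a constant c, as:

```latex
% The usual form of L-notation, with parameters 0 <= alpha <= 1 and constant c,
% as the bound variable n tends to infinity.
\[
  L_n[\alpha, c] = e^{(c + o(1))\,(\ln n)^{\alpha}\,(\ln \ln n)^{1 - \alpha}}
\]
```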
In computational complexity theory, a numeric algorithm runs in pseudo-polynomial time if its running time is a polynomial in the numeric value of the input (the largest integer present in the input), but not necessarily in the length of the input (the number of bits required to represent it), which is the case for polynomial time algorithms.
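A classic illustration (a sketch added here, not from the original text) is primality testing by trial division: the running time grows roughly in proportion to the numeric value n, yet it is exponential in the number of digits needed to write n down:

```python
# Trial-division primality test: a standard example of a pseudo-polynomial
# time algorithm. The loop runs up to about n - 2 times, which is polynomial
# in the numeric value n but exponential in len(str(n)), the input length.

def is_prime_trial_division(n):
    """Return True if n is prime, testing every candidate divisor below n."""
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

if __name__ == "__main__":
    n = 10_000_019          # an 8-digit input
    print(is_prime_trial_division(n))
    # If n happens to be prime, the loop performs about n divisions:
    # manageable for 8 digits, hopeless for an input a few hundred digits
    # long, even though such an input is still short measured in digits.
```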
In computer science, an algorithm is said to be asymptotically optimal if, roughly speaking, for large inputs it performs at worst a constant factor worse than the best possible algorithm. It is a term commonly encountered in computer science research as a result of widespread use of big-O notation.
In computer science, more specifically computational complexity theory, Computers and Intractability: A Guide to the Theory of NP-Completeness is an influential textbook by Michael Garey and David S. Johnson. It was the first book exclusively on the theory of NP-completeness and computational intractability. The book features an appendix providing a thorough compendium of NP-complete problems. The book is now outdated in some respects as it does not cover more recent developments such as the PCP theorem. It is nevertheless still in print and is regarded as a classic: in a 2006 study, the CiteSeer search engine listed the book as the most cited reference in computer science literature.
In computational complexity theory, a problem is NP-complete when it can be solved by a restricted class of brute force search algorithms and it can be used to simulate any other problem with a similar algorithm. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, whose validity can be tested quickly, such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty. The complexity class of problems of this form is called NP, an abbreviation for "nondeterministic polynomial time". A problem is said to be NP-hard if everything in NP can be transformed in polynomial time into it, and a problem is NP-complete if it is both in NP and NP-hard. The NP-complete problems represent the hardest problems in NP. If any NP-complete problem has a polynomial time algorithm, all problems in NP do. The set of NP-complete problems is often denoted by NP-C or NPC.
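To make the "solutions of polynomial length whose validity can be tested quickly" phrasing concrete, here is a hedged Python sketch of a polynomial-time verifier for the vertex cover decision problem mentioned earlier, given an instance (a graph and a budget k) and a proposed cover as the certificate:

```python
# Polynomial-time verifier for the vertex cover decision problem:
# given the instance (edges, k) and a certificate (a proposed set of vertices),
# check in time polynomial in the input size that the certificate is a vertex
# cover of size at most k. The instance is a "yes" instance exactly when some
# certificate of this (polynomially bounded) size passes the check.

def verify_vertex_cover(edges, k, certificate):
    """Return True if certificate is a vertex cover of size at most k."""
    cover = set(certificate)
    if len(cover) > k:
        return False
    return all(u in cover or v in cover for u, v in edges)

if __name__ == "__main__":
    graph_edges = [(1, 2), (2, 3), (3, 4), (2, 4)]
    print(verify_vertex_cover(graph_edges, 2, [2, 3]))   # True:  {2, 3} covers every edge
    print(verify_vertex_cover(graph_edges, 1, [2]))      # False: edge (3, 4) is uncovered
```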
A galactic algorithm is one that runs faster than any other algorithm for problems that are sufficiently large, but where "sufficiently large" is so big that the algorithm is never used in practice. An example is an algorithm that is the fastest known way to multiply two numbers, provided the numbers have at least 10^214857091104455251940635045059417341952 digits. Since this requires (vastly) more digits than there are atoms in the universe, this algorithm is never used in practice. Galactic algorithms were so named by Richard Lipton and Ken Regan, as they will never be used on any of the merely terrestrial data sets we find here on Earth.