In computational complexity theory, the decision tree model is the model of computation in which an algorithm can be considered to be a decision tree, i.e. a sequence of queries or tests that are done adaptively, so the outcome of previous tests can influence the tests performed next.
Typically, these tests have a small number of outcomes (such as a yes–no question) and can be performed quickly (say, with unit computational cost), so the worst-case time complexity of an algorithm in the decision tree model corresponds to the depth of the corresponding tree. This notion of computational complexity of a problem or an algorithm in the decision tree model is called its decision tree complexity or query complexity.
Decision tree models are instrumental in establishing lower bounds for the complexity of certain classes of computational problems and algorithms. Several variants of decision tree models have been introduced, depending on the computational model and type of query algorithms are allowed to perform.
For example, a decision tree argument is used to show that a comparison sort of $n$ items must make $\Omega(n \log n)$ comparisons. For comparison sorts, a query is a comparison of two items $x_i, x_j$, with two outcomes (assuming no items are equal): either $x_i < x_j$ or $x_i > x_j$. Comparison sorts can be expressed as decision trees in this model, since such sorting algorithms only perform these types of queries.
Decision trees are often employed to understand algorithms for sorting and other similar problems; this was first done by Ford and Johnson. [1]
For example, many sorting algorithms are comparison sorts, which means that they only gain information about an input sequence $x_1, x_2, \ldots, x_n$ via local comparisons: testing whether $x_i < x_j$, $x_i = x_j$, or $x_i > x_j$. Assuming that the items to be sorted are all distinct and comparable, this can be rephrased as a yes-or-no question: is $x_i > x_j$?
These algorithms can be modeled as binary decision trees, where the queries are comparisons: an internal node corresponds to a query, and the node's children correspond to the next query when the answer to the question is yes or no. For leaf nodes, the output corresponds to a permutation $\pi$ that describes how the input sequence was scrambled from the fully ordered list of items. (The inverse of this permutation, $\pi^{-1}$, re-orders the input sequence.)
One can show that comparison sorts must use $\Omega(n \log n)$ comparisons through a simple argument: for an algorithm to be correct, it must be able to output every possible permutation of $n$ elements; otherwise, the algorithm would fail for that particular permutation as input. So, its corresponding decision tree must have at least as many leaves as permutations: $n!$ leaves. Any binary tree with at least $n!$ leaves has depth at least $\log_2(n!) = \Omega(n \log n)$, so this is a lower bound on the run time of a comparison sorting algorithm. In this case, the existence of numerous comparison-sorting algorithms having this time complexity, such as mergesort and heapsort, demonstrates that the bound is tight. [2] : 91
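The bound can be checked numerically. The following sketch is illustrative only (the helper merge_sort_count is not a standard routine): it compares the information-theoretic lower bound $\lceil \log_2 n! \rceil$ with the number of comparisons made by an ordinary merge sort on a random input.

```python
import math
import random

def merge_sort_count(a):
    """Merge sort returning (sorted list, number of comparisons performed)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort_count(a[:mid])
    right, cr = merge_sort_count(a[mid:])
    merged, comps, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        comps += 1                      # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comps

for n in (4, 8, 16, 32):
    lower = math.ceil(math.log2(math.factorial(n)))   # depth of any correct tree
    _, comps = merge_sort_count(random.sample(range(10 * n), n))
    print(n, lower, comps)   # merge sort stays close to the lower bound
```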
This argument does not use anything about the type of query, so it in fact proves a lower bound for any sorting algorithm that can be modeled as a binary decision tree. In essence, this is a rephrasing of the information-theoretic argument that a correct sorting algorithm must learn at least $\log_2(n!) = \Omega(n \log n)$ bits of information about the input sequence. As a result, the argument also works for randomized decision trees.
Other decision tree lower bounds do use that the query is a comparison. For example, consider the task of using only comparisons to find the smallest number among $n$ numbers. Before the smallest number can be determined, every number except the smallest must "lose" (compare greater) in at least one comparison. So, it takes at least $n - 1$ comparisons to find the minimum. (The information-theoretic argument here only gives a lower bound of $\log_2 n$.) A similar argument works for general lower bounds for computing order statistics. [2] : 214
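To make the counting concrete, here is a minimal sketch (the function name is illustrative): a left-to-right scan finds the minimum with exactly $n - 1$ comparisons, matching the lower bound.

```python
def argmin_with_count(xs):
    """Return (index of the minimum, comparisons used). Every element other
    than the running minimum loses exactly one comparison, so the count is
    len(xs) - 1."""
    best, comps = 0, 0
    for i in range(1, len(xs)):
        comps += 1
        if xs[i] < xs[best]:
            best = i
    return best, comps

print(argmin_with_count([5, 2, 9, 1, 7]))   # (3, 4): four comparisons for five items
```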
Linear decision trees generalize the above comparison decision trees to computing functions that take real vectors $x \in \mathbb{R}^n$ as input. The tests in linear decision trees are linear functions: for a particular choice of real numbers $a_0, a_1, \ldots, a_n$, output the sign of $a_0 + a_1 x_1 + \cdots + a_n x_n$. (Algorithms in this model can only depend on the sign of the output.) Comparison trees are linear decision trees, because the comparison between $x_i$ and $x_j$ corresponds to the linear function $x_i - x_j$. From its definition, linear decision trees can only specify functions $f$ whose fibers can be constructed by taking unions and intersections of half-spaces.
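As a minimal sketch (the helper name is hypothetical), a linear query exposes only the sign of the linear form; the comparison $x_i < x_j$ is the special case with coefficients $1$ and $-1$ and $a_0 = 0$.

```python
def linear_query(a0, a, x):
    """Return the sign (-1, 0, +1) of a0 + a·x, the only information a linear
    decision tree receives from this test."""
    v = a0 + sum(ai * xi for ai, xi in zip(a, x))
    return (v > 0) - (v < 0)

# The comparison x_0 < x_1 expressed as a linear query: the sign of x_0 - x_1.
print(linear_query(0.0, [1.0, -1.0, 0.0], [2.5, 7.0, 1.0]))   # -1, so x_0 < x_1
```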
Algebraic decision trees are a generalization of linear decision trees that allow the test functions to be polynomials of degree $d$. Geometrically, the space is divided into semi-algebraic sets (a generalization of a hyperplane).
These decision tree models, defined by Rabin [3] and Reingold, [4] are often used for proving lower bounds in computational geometry. [5] For example, Ben-Or showed that element uniqueness (the task of computing $f : \mathbb{R}^n \to \{0, 1\}$, where $f(x)$ is 0 if and only if there exist distinct coordinates $i \neq j$ such that $x_i = x_j$) requires an algebraic decision tree of depth $\Omega(n \log n)$. [6] This was first shown for linear decision models by Dobkin and Lipton. [7] They also showed a lower bound for linear decision trees on the knapsack problem, generalized to algebraic decision trees by Steele and Yao. [8]
For Boolean decision trees, the task is to compute the value of an $n$-bit Boolean function $f : \{0, 1\}^n \to \{0, 1\}$ on an input $x \in \{0, 1\}^n$. The queries correspond to reading a bit of the input, $x_i$, and the output is $f(x)$. Each query may depend on the answers to previous queries. There are many types of computational models using decision trees that could be considered, admitting multiple complexity notions, called complexity measures.
If the output of a decision tree is $f(x)$ for all $x \in \{0, 1\}^n$, the decision tree is said to "compute" $f$. The depth of a tree is the maximum number of queries that can happen before a leaf is reached and a result obtained. $D(f)$, the deterministic decision tree complexity of $f$, is the smallest depth among all deterministic decision trees that compute $f$.
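For very small $n$, $D(f)$ can be computed directly from the definition: if $f$ is not yet determined by the answers received, try every possible next query and let an adversary choose the worse answer. The sketch below is a brute-force illustration (the function name and interface are not standard).

```python
from itertools import product

def decision_tree_depth(f, n, answers=None):
    """D(f) for f: {0,1}^n -> {0,1}, by exhaustive recursion.
    'answers' maps already-queried positions to the bits that were read."""
    answers = answers or {}
    # Inputs still consistent with the bits read so far.
    inputs = [x for x in product((0, 1), repeat=n)
              if all(x[i] == b for i, b in answers.items())]
    if len({f(x) for x in inputs}) <= 1:
        return 0                                  # value of f already determined
    best = n
    for i in range(n):
        if i in answers:
            continue
        # Query bit i next; the adversary picks the worse of the two answers.
        worst = max(decision_tree_depth(f, n, {**answers, i: b}) for b in (0, 1))
        best = min(best, 1 + worst)
    return best

print(decision_tree_depth(lambda x: x[0] ^ x[1] ^ x[2], 3))   # parity: 3
print(decision_tree_depth(lambda x: max(x), 3))               # OR: 3 (all-zeros input)
```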
One way to define a randomized decision tree is to add additional nodes to the tree, each controlled by a probability $p_i$. Another, equivalent, definition is as a distribution over deterministic decision trees. Based on this second definition, the complexity of the randomized tree is defined as the largest depth among all the trees in the support of the underlying distribution. $R_2(f)$ is defined as the complexity of the lowest-depth randomized decision tree whose result is $f(x)$ with probability at least $2/3$ for all $x \in \{0, 1\}^n$ (i.e., with bounded two-sided error).
$R_2(f)$ is known as the Monte Carlo randomized decision-tree complexity, because the result is allowed to be incorrect with bounded two-sided error. The Las Vegas decision-tree complexity $R_0(f)$ measures the expected depth of a decision tree that must be correct (i.e., has zero error). There is also a one-sided bounded-error version, denoted $R_1(f)$.
The nondeterministic decision tree complexity of a function is known more commonly as the certificate complexity of that function. It measures the number of input bits that a nondeterministic algorithm would need to look at in order to evaluate the function with certainty.
Formally, the certificate complexity of $f$ at $x$ is the size of the smallest subset $S \subseteq \{1, \ldots, n\}$ of indices such that, for all $y \in \{0, 1\}^n$, if $y_i = x_i$ for all $i \in S$, then $f(y) = f(x)$. The certificate complexity of $f$ is the maximum certificate complexity over all $x$. The analogous notion where one only requires the verifier to be correct with 2/3 probability is denoted $RC(f)$.
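The definition can be evaluated by brute force for small $n$; the following sketch (illustrative names, exponential running time) finds the smallest certificate at each input and returns the maximum.

```python
from itertools import product, combinations

def certificate_complexity(f, n):
    """C(f): the maximum over x of the smallest index set S such that fixing
    the bits of x on S already forces the value of f."""
    worst = 0
    for x in product((0, 1), repeat=n):
        for k in range(n + 1):
            # S certifies x if every y agreeing with x on S satisfies f(y) = f(x).
            if any(all(f(y) == f(x)
                       for y in product((0, 1), repeat=n)
                       if all(y[i] == x[i] for i in S))
                   for S in combinations(range(n), k)):
                worst = max(worst, k)
                break
    return worst

print(certificate_complexity(lambda x: max(x), 3))        # OR on 3 bits: 3
print(certificate_complexity(lambda x: x[0] ^ x[1], 2))   # parity on 2 bits: 2
```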
The quantum decision tree complexity $Q_2(f)$ is the depth of the lowest-depth quantum decision tree that gives the result $f(x)$ with probability at least $2/3$ for all $x \in \{0, 1\}^n$. Another quantity, $Q_E(f)$, is defined as the depth of the lowest-depth quantum decision tree that gives the result $f(x)$ with probability 1 in all cases (i.e., computes $f$ exactly). $Q_2(f)$ and $Q_E(f)$ are more commonly known as quantum query complexities, because the direct definition of a quantum decision tree is more complicated than in the classical case. Similar to the randomized case, we define $Q_0(f)$ and $Q_1(f)$.
These notions are typically bounded by the notions of degree and approximate degree. The degree of $f$, denoted $\deg(f)$, is the smallest degree of any polynomial $p$ satisfying $f(x) = p(x)$ for all $x \in \{0, 1\}^n$. The approximate degree of $f$, denoted $\widetilde{\deg}(f)$, is the smallest degree of any polynomial $p$ satisfying $p(x) \leq 1/3$ whenever $f(x) = 0$ and $p(x) \geq 2/3$ whenever $f(x) = 1$.
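The exact degree is the degree of the unique multilinear polynomial that agrees with $f$ on $\{0, 1\}^n$; its coefficients can be recovered by inclusion–exclusion over subsets of variables, as in the following sketch (function names are illustrative).

```python
from itertools import product, combinations

def degree(f, n):
    """deg(f): degree of the unique multilinear polynomial agreeing with
    f on {0,1}^n. The coefficient of the monomial prod_{i in S} x_i is
    sum over subsets T of S of (-1)^(|S|-|T|) * f(indicator of T)."""
    def indicator(S):
        return tuple(1 if i in S else 0 for i in range(n))
    d = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            coeff = sum((-1) ** (len(S) - r) * f(indicator(T))
                        for r in range(len(S) + 1)
                        for T in combinations(S, r))
            if coeff != 0:
                d = max(d, len(S))
    return d

print(degree(lambda x: x[0] ^ x[1] ^ x[2], 3))   # parity: 3
print(degree(lambda x: x[0] & x[1], 2))          # AND: 2 (the monomial x1*x2)
print(degree(lambda x: max(x), 2))               # OR: 2 (x1 + x2 - x1*x2)
```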
Beals et al. established that $Q_E(f) \geq \deg(f)/2$ and $Q_2(f) \geq \widetilde{\deg}(f)/2$. [9]
It follows immediately from the definitions that, for all $n$-bit Boolean functions $f$, $Q_2(f) \leq R_2(f) \leq R_1(f) \leq R_0(f) \leq D(f)$ and $Q_2(f) \leq Q_E(f) \leq D(f)$. Finding the best upper bounds in the converse direction is a major goal in the field of query complexity.
All of these types of query complexity are polynomially related. Blum and Impagliazzo, [10] Hartmanis and Hemachandra, [11] and Tardos [12] independently discovered that $D(f) \leq R_0(f)^2$. Noam Nisan found that the Monte Carlo randomized decision tree complexity is also polynomially related to deterministic decision tree complexity: $D(f) = O(R_2(f)^3)$. [13] (Nisan also showed that $D(f) = O(R_1(f)^2)$.) A tighter relationship is known between the Monte Carlo and Las Vegas models: $R_0(f) = O(R_2(f)^2 \log R_2(f))$. [14] This relationship is optimal up to polylogarithmic factors. [15] As for quantum decision tree complexities, $D(f) = O(Q_2(f)^4)$, and this bound is tight. [16] [15] Midrijanis showed that $D(f) = O(Q_E(f)^3)$, [17] [18] improving a quartic bound due to Beals et al. [9]
These polynomial relationships are valid only for total Boolean functions. For partial Boolean functions, whose domain is a subset of $\{0, 1\}^n$, an exponential separation between $Q_E(f)$ and $D(f)$ is possible; the first example of such a problem was discovered by Deutsch and Jozsa.
For a Boolean function $f : \{0, 1\}^n \to \{0, 1\}$, the sensitivity of $f$ is defined to be the maximum sensitivity of $f$ over all $x$, where the sensitivity of $f$ at $x$ is the number of single-bit changes in $x$ that change the value of $f(x)$. Sensitivity is related to the notion of total influence from the analysis of Boolean functions, which is equal to the average sensitivity over all $x$.
The sensitivity conjecture is the conjecture that sensitivity is polynomially related to query complexity; that is, there exists an exponent $c$ such that, for all $f$, $D(f) = O(s(f)^c)$ and $s(f) = O(D(f)^c)$. One can show through a simple argument that $s(f) \leq D(f)$, so the conjecture is specifically concerned with finding a lower bound for sensitivity. Since all of the previously discussed complexity measures are polynomially related, the precise type of complexity measure is not relevant. However, this is typically phrased as the question of relating sensitivity with block sensitivity.
The block sensitivity of $f$, denoted $bs(f)$, is defined to be the maximum block sensitivity of $f$ over all $x$. The block sensitivity of $f$ at $x$ is the maximum number $t$ of disjoint subsets $B_1, \ldots, B_t \subseteq \{1, \ldots, n\}$ such that, for each of the subsets $B_i$, flipping the bits of $x$ corresponding to $B_i$ changes the value of $f$. [13]
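Both measures can be computed by brute force for small $n$. The sketch below (illustrative names, exponential running time) evaluates sensitivity directly and finds block sensitivity by searching for the largest family of pairwise disjoint sensitive blocks.

```python
from itertools import product, combinations

def flip(x, block):
    """Flip the bits of x indexed by 'block'."""
    return tuple(b ^ 1 if i in block else b for i, b in enumerate(x))

def sensitivity(f, n):
    """s(f): max over x of the number of single-bit flips that change f(x)."""
    return max(sum(f(flip(x, {i})) != f(x) for i in range(n))
               for x in product((0, 1), repeat=n))

def block_sensitivity(f, n):
    """bs(f): max over x of the largest number of pairwise disjoint blocks
    whose flips each change f(x)."""
    def max_disjoint(blocks, used):
        best = 0
        for idx, B in enumerate(blocks):
            if not (B & used):
                best = max(best, 1 + max_disjoint(blocks[idx + 1:], used | B))
        return best

    best = 0
    for x in product((0, 1), repeat=n):
        sensitive = [set(B) for k in range(1, n + 1)
                     for B in combinations(range(n), k)
                     if f(flip(x, set(B))) != f(x)]
        best = max(best, max_disjoint(sensitive, set()))
    return best

f = lambda x: max(x)                                 # OR on 3 bits
print(sensitivity(f, 3), block_sensitivity(f, 3))    # 3 3, attained at the all-zeros input
```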
In 2019, Hao Huang proved the sensitivity conjecture, showing that $bs(f) = O(s(f)^4)$. [19] [20]