In computational complexity theory, and more specifically in the analysis of algorithms with integer data, the transdichotomous model is a variation of the random-access machine in which the machine word size is assumed to match the problem size. The model was proposed by Michael Fredman and Dan Willard, [1] who chose its name "because the dichotomy between the machine model and the problem size is crossed in a reasonable manner." [2]
In a problem such as integer sorting in which there are n integers to be sorted, the transdichotomous model assumes that each integer may be stored in a single word of computer memory, that operations on single words take constant time per operation, and that the number of bits that can be stored in a single word is at least log₂ n. The goal of complexity analysis in this model is to find time bounds that depend only on n and not on the actual size of the input values or the machine words. [3] [4] In modeling integer computation, it is necessary to assume that machine words are limited in size, because models with unlimited precision are unreasonably powerful (able to solve PSPACE-complete problems in polynomial time). [5] The transdichotomous model makes a minimal assumption of this type: that there is some limit, and that the limit is large enough to allow random-access indexing into the input data. [3]
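For concreteness, the following sketch (not taken from the sources above) illustrates the assumption on a machine with 64-bit words: it computes the number of bits needed to index n input items, checks that against the word size, and then treats an index as a single word on which each operation costs constant time.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* A minimal sketch of the transdichotomous assumption: the word size w must
   be at least log2(n), so that an index into the n input items fits in a
   single machine word. The 64-bit word size is an assumption of this sketch. */
int main(void) {
    const int w = 64;                    /* assumed machine word size in bits */
    uint64_t n = 1000000;                /* problem size */

    /* bits needed to address n items: ceil(log2 n) */
    int bits_needed = 0;
    while (((uint64_t)1 << bits_needed) < n)
        bits_needed++;

    assert(bits_needed <= w);            /* the transdichotomous assumption */

    /* Under this assumption, an index such as n-1 is stored in one word,
       and operations on it (comparison, shift, addition) cost O(1) each. */
    uint64_t index = n - 1;
    printf("n = %llu needs %d bits; index %llu fits in one %d-bit word\n",
           (unsigned long long)n, bits_needed,
           (unsigned long long)index, w);
    return 0;
}
```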
As well as its application to integer sorting, the transdichotomous model has also been applied to the design of priority queues [6] and to problems in computational geometry [3] and graph algorithms. [7]
In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that solve it.
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer, one that is solvable by the mechanical application of mathematical steps, such as an algorithm.
In computational complexity theory, the class NC (for "Nick's Class") is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem with input size n is in NC if there exist constants c and k such that it can be solved in time O((log n)^c) using O(n^k) parallel processors. Stephen Cook coined the name "Nick's class" after Nick Pippenger, who had done extensive research on circuits with polylogarithmic depth and polynomial size.
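As a rough illustration of polylogarithmic depth, the following sketch sums 16 numbers in O(log n) rounds; each round consists of independent additions that a parallel machine could assign to separate processors, although here the rounds are simulated sequentially.

```c
#include <stdio.h>

/* A sketch (not a formal NC algorithm) of a logarithmic-depth parallel
   reduction: summing N numbers in O(log N) rounds, where all additions in a
   round are independent and could be done by separate processors. Here the
   rounds are simulated sequentially. */
#define N 16

int main(void) {
    int a[N];
    for (int i = 0; i < N; i++) a[i] = i + 1;        /* 1 + 2 + ... + 16 = 136 */

    for (int stride = 1; stride < N; stride *= 2) {  /* O(log N) rounds */
        /* each iteration of this inner loop is an independent addition,
           executable by its own processor within the same round */
        for (int i = 0; i + stride < N; i += 2 * stride)
            a[i] += a[i + stride];
    }
    printf("sum = %d\n", a[0]);                      /* prints: sum = 136 */
    return 0;
}
```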
In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
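The following sketch illustrates that counting convention on a linear scan: an instrumented counter grows proportionally to n, and the choice of what to treat as an elementary operation only changes the constant factor, not the linear growth.

```c
#include <stdio.h>

/* A sketch of estimating time complexity by counting elementary operations:
   a linear scan over n items, instrumented with a counter. Which steps count
   as "elementary" (comparisons only, or also increments and loads) changes
   the constant factor, but not the linear growth in n. */
static long count_ops_linear_scan(const int *a, int n, int target) {
    long ops = 0;
    for (int i = 0; i < n; i++) {
        ops++;                           /* one comparison counted per element */
        if (a[i] == target) break;
    }
    return ops;
}

int main(void) {
    int a[1000];
    for (int i = 0; i < 1000; i++) a[i] = i;
    /* worst case: the target is absent, so the count equals n */
    printf("ops for n=1000: %ld\n", count_ops_linear_scan(a, 1000, -1));
    return 0;
}
```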
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both) are random variables.
In computer science, a selection algorithm is an algorithm for finding the kth smallest value in a collection of ordered values, such as numbers. The value that it finds is called the kth order statistic. Selection includes as special cases the problems of finding the minimum, median, and maximum element in the collection. Selection algorithms include quickselect, and the median of medians algorithm. When applied to a collection of n values, these algorithms take linear time, O(n) as expressed using big O notation. For data that is already structured, faster algorithms may be possible; as an extreme case, selection in an already-sorted array takes time O(1).
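A minimal quickselect sketch is shown below; the randomly chosen pivot gives expected linear time, while the median-of-medians pivot rule (not shown) achieves worst-case linear time.

```c
#include <stdio.h>
#include <stdlib.h>

/* A minimal quickselect sketch: find the k-th smallest element (k counted
   from 0) by partitioning around a randomly chosen pivot and recursing into
   one side only. Expected linear time; the median-of-medians pivot rule
   (not shown) makes the worst case linear as well. */
static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

static int quickselect(int *a, int lo, int hi, int k) {
    while (lo < hi) {
        int pivot = a[lo + rand() % (hi - lo + 1)];
        int i = lo, j = hi;
        while (i <= j) {                  /* Hoare-style partition */
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j) { swap(&a[i], &a[j]); i++; j--; }
        }
        if (k <= j)      hi = j;          /* k-th smallest lies in the left part */
        else if (k >= i) lo = i;          /* ... or in the right part */
        else             return a[k];     /* it is pinned between the two parts */
    }
    return a[lo];
}

int main(void) {
    int a[] = {9, 1, 8, 2, 7, 3, 6, 4, 5};
    int n = sizeof a / sizeof a[0];
    printf("median = %d\n", quickselect(a, 0, n - 1, n / 2));  /* prints 5 */
    return 0;
}
```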
In computer science, a fusion tree is a type of tree data structure that implements an associative array on w-bit integers on a finite universe, where each of the input integers has size less than 2^w and is non-negative. When operating on a collection of n key–value pairs, it uses O(n) space and performs searches in O(log_w n) time, which is asymptotically faster than a traditional self-balancing binary search tree, and also better than the van Emde Boas tree for large values of w. It achieves this speed by using certain constant-time operations that can be done on a machine word. Fusion trees were invented in 1990 by Michael Fredman and Dan Willard.
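The sketch below is not the fusion tree construction itself, but it illustrates the flavor of constant-time word operation involved: several short keys are packed into one 64-bit word and all of them are compared against a query using a constant number of arithmetic and bitwise operations (here, 7-bit keys and the GCC/Clang popcount builtin).

```c
#include <stdint.h>
#include <stdio.h>

/* Illustration only (not Fredman-Willard's exact construction): pack eight
   7-bit keys into one 64-bit word, each in an 8-bit field whose high bit is
   a "test" bit, and compare all of them against a query with O(1) word
   operations (one subtraction, one mask, one popcount). Keys and the query
   must both be less than 128 for the trick to work. */
#define FIELD 8                      /* bits per field: 7 key bits + 1 test bit */
#define KEYS  8                      /* number of keys packed into a 64-bit word */
static const uint64_t TEST_BITS = 0x8080808080808080ULL;  /* high bit of each field */

/* Pack keys[0..KEYS-1] (each < 128) into one word, test bit set in every field. */
static uint64_t pack(const uint8_t keys[KEYS]) {
    uint64_t w = 0;
    for (int i = 0; i < KEYS; i++)
        w |= ((uint64_t)(keys[i] | 0x80)) << (i * FIELD);
    return w;
}

/* Count how many packed keys are >= q, using a constant number of word ops. */
static int count_geq(uint64_t packed, uint8_t q) {
    uint64_t qrep = 0x0101010101010101ULL * (uint64_t)q;  /* q copied into every field */
    uint64_t diff = packed - qrep;       /* test bit of field i survives iff key_i >= q */
    return __builtin_popcountll(diff & TEST_BITS);        /* GCC/Clang builtin */
}

int main(void) {
    uint8_t keys[KEYS] = {3, 17, 42, 99, 5, 64, 127, 0};
    uint64_t w = pack(keys);
    printf("%d keys >= 42\n", count_geq(w, 42));  /* prints: 4 keys >= 42 */
    return 0;
}
```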
In computer science, the AF-heap is a type of priority queue for integer data, an extension of the fusion tree using an atomic heap proposed by M. L. Fredman and D. E. Willard.
In theoretical computer science, a computational problem is one that asks for a solution in terms of an algorithm. For example, the problem of factoring a given positive integer into its prime factors is a computational problem.
In computer science, the range searching problem consists of processing a set S of objects, in order to determine which objects from S intersect with a query object, called the range. For example, if S is a set of points corresponding to the coordinates of several cities, find the subset of cities within a given range of latitudes and longitudes.
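In the one-dimensional case the idea can be sketched very simply: after sorting the coordinates, a query range [lo, hi] is answered with two binary searches; higher-dimensional ranges require richer structures such as range trees or k-d trees. The points and range below are illustrative only.

```c
#include <stdio.h>

/* A sketch of the simplest (one-dimensional) range searching setup: the
   points are preprocessed by sorting, and a query [lo, hi] is answered with
   two binary searches in O(log n) time. */
static int lower_bound(const int *a, int n, int x) {  /* first index with a[i] >= x */
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < x) lo = mid + 1; else hi = mid;
    }
    return lo;
}

int main(void) {
    int coords[] = {2, 3, 5, 8, 13, 21, 34};           /* already sorted */
    int n = sizeof coords / sizeof coords[0];
    int lo = 5, hi = 20;
    int count = lower_bound(coords, n, hi + 1) - lower_bound(coords, n, lo);
    printf("points in [%d, %d]: %d\n", lo, hi, count); /* 5, 8, 13 -> 3 */
    return 0;
}
```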
In computing, especially computational geometry, a real RAM is a mathematical model of a computer that can compute with exact real numbers instead of the binary fixed point or floating point numbers used by most actual computers. The real RAM was formulated by Michael Ian Shamos in his 1978 Ph.D. dissertation.
In the analysis of algorithms, several authors have studied the computation of the volume of high-dimensional convex bodies, a problem that can also be used to model many other problems in combinatorial enumeration. Often these works use a black box model of computation in which the input is given by a subroutine for testing whether a point is inside or outside of the convex body, rather than by an explicit listing of the vertices or faces of a convex polytope. It is known that, in this model, no deterministic algorithm can achieve an accurate approximation, and even for an explicit listing of faces or vertices the problem is #P-hard. However, a joint work by Martin Dyer, Alan M. Frieze and Ravindran Kannan provided a randomized polynomial time approximation scheme for the problem, providing a sharp contrast between the capabilities of randomized and deterministic algorithms.
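The sketch below is not the Dyer–Frieze–Kannan algorithm (which relies on random walks to remain efficient in high dimensions); it only illustrates the black-box model and randomized estimation, using naive hit-or-miss sampling against a membership oracle for a three-dimensional ball.

```c
#include <stdio.h>
#include <stdlib.h>

/* A sketch of the black-box model for volume computation: the convex body is
   available only through a membership oracle. Here naive hit-or-miss sampling
   estimates the volume of the 3-dimensional unit ball inside the cube
   [-1,1]^3. This is NOT the Dyer-Frieze-Kannan algorithm; in high dimensions
   rejection sampling of this kind becomes hopelessly inefficient. */
#define DIM     3
#define SAMPLES 1000000

/* membership oracle: is the point inside the unit ball? */
static int inside_body(const double p[DIM]) {
    double s = 0.0;
    for (int i = 0; i < DIM; i++) s += p[i] * p[i];
    return s <= 1.0;
}

int main(void) {
    srand(12345);
    long hits = 0;
    for (long t = 0; t < SAMPLES; t++) {
        double p[DIM];
        for (int i = 0; i < DIM; i++)
            p[i] = 2.0 * rand() / RAND_MAX - 1.0;      /* uniform in [-1, 1] */
        hits += inside_body(p);
    }
    double cube_volume = 8.0;                          /* volume of [-1,1]^3 */
    printf("estimated ball volume: %f (exact: 4*pi/3 = 4.18879)\n",
           cube_volume * hits / SAMPLES);
    return 0;
}
```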
Random-access Turing machines (RATMs) are a computational model in theoretical computer science, studied in particular in connection with tractability in big data computing. Unlike conventional Turing machines, which are limited to sequential memory access, RATMs can access any memory position directly and can reach any memory cell in a constant amount of time. This brings the model closer to the memory access patterns of modern computing systems and makes it a more realistic framework for analyzing algorithms that handle large-scale data.
In computer science, integer sorting is the algorithmic problem of sorting a collection of data values by integer keys. Algorithms designed for integer sorting may also often be applied to sorting problems in which the keys are floating point numbers, rational numbers, or text strings. The ability to perform integer arithmetic on the keys allows integer sorting algorithms to be faster than comparison sorting algorithms in many cases, depending on the details of which operations are allowed in the model of computing and how large the integers to be sorted are.
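One of the simplest examples is least-significant-digit radix sort, sketched below for 32-bit keys: four 256-way counting-sort passes give time linear in n for a fixed word size, beating the Ω(n log n) comparison-sorting bound; more sophisticated algorithms improve the dependence on the key length.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A minimal sketch of one integer sorting algorithm: least-significant-digit
   radix sort on 32-bit keys, one byte (256-way counting sort) per pass.
   For w-bit keys this takes O((n + 256) * w/8) time -- linear in n for a
   fixed word size. */
static void radix_sort_u32(uint32_t *a, int n) {
    uint32_t *buf = malloc((size_t)n * sizeof *buf);
    for (int shift = 0; shift < 32; shift += 8) {     /* 4 passes of 8 bits */
        int count[256] = {0};
        for (int i = 0; i < n; i++)                   /* histogram of this byte */
            count[(a[i] >> shift) & 0xFF]++;
        int pos[256], running = 0;
        for (int b = 0; b < 256; b++) {               /* prefix sums -> positions */
            pos[b] = running;
            running += count[b];
        }
        for (int i = 0; i < n; i++)                   /* stable redistribution */
            buf[pos[(a[i] >> shift) & 0xFF]++] = a[i];
        memcpy(a, buf, (size_t)n * sizeof *a);
    }
    free(buf);
}

int main(void) {
    uint32_t a[] = {170, 45, 75, 90, 802, 24, 2, 66};
    int n = sizeof a / sizeof a[0];
    radix_sort_u32(a, n);
    for (int i = 0; i < n; i++) printf("%u ", a[i]);  /* 2 24 45 66 75 90 170 802 */
    printf("\n");
    return 0;
}
```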
Mihai Pătrașcu was a Romanian-American computer scientist at AT&T Labs in Florham Park, New Jersey, United States.
In computer science, the cell-probe model is a model of computation similar to the random-access machine, except that only memory accesses are counted; all other operations are free. This model is useful for proving lower bounds on algorithms for data structure problems.
Dan Edward Willard was an American computer scientist and logician, and a professor of computer science at the University at Albany.
In computer science, X + Y sorting is the problem of sorting pairs of numbers by their sums. Applications of the problem include transit fare minimisation, VLSI design, and sparse polynomial multiplication. As with comparison sorting and integer sorting more generally, algorithms for this problem can be based only on comparisons of these sums, or on other operations that work only when the inputs are small integers.
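The naive comparison-based baseline, sketched below, forms all n² pairwise sums explicitly and sorts them with a general-purpose sort, using O(n² log n) comparisons; the question of interest is how much of this work can be avoided.

```c
#include <stdio.h>
#include <stdlib.h>

/* A sketch of the naive comparison-based approach to X + Y sorting: form all
   n*n pairwise sums explicitly and sort them with a general-purpose sort,
   using O(n^2 log n) comparisons. */
static int cmp_int(const void *p, const void *q) {
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

int main(void) {
    int x[] = {1, 4, 7};
    int y[] = {2, 3, 9};
    int n = 3, sums[9], k = 0;

    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            sums[k++] = x[i] + y[j];

    qsort(sums, (size_t)(n * n), sizeof sums[0], cmp_int);
    for (int i = 0; i < n * n; i++) printf("%d ", sums[i]);
    printf("\n");   /* 3 4 6 7 9 10 10 13 16 */
    return 0;
}
```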
In theoretical computer science, the word RAM model is a model of computation in which a random-access machine does arithmetic and bitwise operations on a word of w bits. Michael Fredman and Dan Willard created it in 1990 to simulate programming languages like C.
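As a sketch of what the model charges as a constant number of operations, the classic SWAR population count computes the number of set bits in a 64-bit word with a fixed sequence of shifts, masks, additions, and one multiplication, rather than a w-step bit-by-bit loop.

```c
#include <stdint.h>
#include <stdio.h>

/* A sketch of what the word RAM charges as O(1): counting the set bits of a
   64-bit word with a fixed number of shifts, masks, additions, and one
   multiplication (the standard SWAR popcount). Word-level tricks of this
   flavor underlie structures such as fusion trees. */
static unsigned popcount64(uint64_t x) {
    x = x - ((x >> 1) & 0x5555555555555555ULL);                           /* 2-bit sums */
    x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL); /* 4-bit sums */
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;                           /* 8-bit sums */
    return (unsigned)((x * 0x0101010101010101ULL) >> 56);                 /* total in top byte */
}

int main(void) {
    printf("%u\n", popcount64(0xF0F0F0F0F0F0F0F0ULL));  /* prints 32 */
    printf("%u\n", popcount64(0x8000000000000001ULL));  /* prints 2 */
    return 0;
}
```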
In computer science, the predecessor problem involves maintaining a set of items so that, given a query element, one can efficiently determine which element of the set precedes or succeeds it in a given order. Data structures used to solve the problem include balanced binary search trees, van Emde Boas trees, and fusion trees. In the static predecessor problem, the set of elements does not change, but in the dynamic predecessor problem, insertions into and deletions from the set are allowed.
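In its simplest static form the problem can be sketched with a sorted array and binary search, answering a predecessor query in O(log n) time; van Emde Boas trees and fusion trees achieve bounds that depend on the word size w instead.

```c
#include <stdio.h>

/* A sketch of the static predecessor problem in its simplest form: with the
   set stored as a sorted array, a predecessor query is a binary search taking
   O(log n) time. */
static int predecessor(const int *a, int n, int x) {
    int lo = 0, hi = n - 1, best = -1;          /* -1 signals "no predecessor" */
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < x) { best = a[mid]; lo = mid + 1; }  /* candidate; go right */
        else            { hi = mid - 1; }                 /* too big; go left */
    }
    return best;
}

int main(void) {
    int a[] = {2, 5, 11, 17, 23};               /* sorted set */
    printf("pred(12) = %d\n", predecessor(a, 5, 12));  /* prints 11 */
    printf("pred(2)  = %d\n", predecessor(a, 5, 2));   /* prints -1 (none) */
    return 0;
}
```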