Transcomputational problem

In computational complexity theory, a transcomputational problem is a problem that requires processing of more than 10^93 bits of information. [1] Any number greater than 10^93 is called a transcomputational number. The number 10^93, called Bremermann's limit, is, according to Hans-Joachim Bremermann, the total number of bits processed by a hypothetical computer the size of the Earth within a time period equal to the estimated age of the Earth. [1] [2] The term transcomputational was coined by Bremermann. [3]
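
As a rough sanity check (a sketch using standard physical estimates, not figures quoted from the cited papers), multiplying Bremermann's processing rate of about 1.36 × 10^50 bits per second per kilogram by the mass and age of the Earth lands within an order of magnitude of 10^93:

    # Order-of-magnitude check of the 10^93 figure. The constants are
    # standard estimates, not values taken from the cited sources.
    RATE = 1.36e50               # bits per second per kilogram, ~c^2/h
    EARTH_MASS = 5.97e24         # kilograms
    EARTH_AGE = 4.5e9 * 3.156e7  # seconds (about 4.5 billion years)

    print(f"{RATE * EARTH_MASS * EARTH_AGE:.2e}")  # ~1.15e+92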

Examples

Testing integrated circuits

Exhaustively testing all combinations of an integrated circuit with 309 Boolean inputs and 1 output requires testing a total of 2^309 combinations of inputs. Since 2^309 is a transcomputational number (that is, a number greater than 10^93), the problem of testing such a system of integrated circuits is a transcomputational problem. This means that there is no way to verify the correctness of the circuit for all combinations of inputs through brute force alone. [1] [4]
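
Because Python integers have arbitrary precision, the comparison can be checked exactly; the following sketch also confirms the off-by-one noted at reference [4]:

    # Exhaustive test count for a 309-input Boolean circuit,
    # compared exactly against Bremermann's limit.
    limit = 10 ** 93
    print(2 ** 309 > limit)  # True: 309 inputs are transcomputational
    print(2 ** 308 > limit)  # False: 308 inputs fall just short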

Pattern recognition

Consider a q×q array of the chessboard type, each square of which can have one of k colors. Altogether there are k^n color patterns, where n = q^2. The problem of determining the best classification of the patterns, according to some chosen criterion, may be solved by a search through all possible color patterns. For two colors, such a search becomes transcomputational when the array is 18×18 or larger. For a 10×10 array, the problem becomes transcomputational when there are 9 or more colors. [1]
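
These thresholds are easy to verify directly. A minimal sketch (the function name is illustrative):

    # A k-coloring of a q x q array has k^(q*q) possible patterns;
    # the exhaustive search is transcomputational once that count
    # exceeds 10^93.
    def is_transcomputational(k: int, q: int) -> bool:
        return k ** (q * q) > 10 ** 93

    print(is_transcomputational(2, 18))  # True:  two colors, 18 x 18
    print(is_transcomputational(2, 17))  # False: two colors, 17 x 17
    print(is_transcomputational(9, 10))  # True:  nine colors, 10 x 10
    print(is_transcomputational(8, 10))  # False: eight colors, 10 x 10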

This has some relevance in physiological studies of the retina. The retina contains about a million light-sensitive cells. Even if there were only two possible states for each cell (say, an active state and an inactive state), processing the retina as a whole would require processing more than 10^300,000 bits of information. This is far beyond Bremermann's limit. [1]
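
The figure of 10^300,000 follows from taking the base-10 logarithm of 2^(10^6); a quick check:

    import math

    # 10^6 two-state cells give 2^(10^6) possible retinal states;
    # in decimal, that number has about 301,030 digits.
    print(round(1_000_000 * math.log10(2)))  # 301030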

General systems problems

A system of n variables, each of which can take k different states, can have k^n possible system states. To analyze such a system, a minimum of k^n bits of information must be processed. The problem becomes transcomputational when k^n > 10^93. This happens for the following values of k and n: [1]

k:    2    3    4    5    6    7    8    9   10
n:  308  194  154  133  119  110  102   97   93
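
Recomputing the thresholds exactly (a sketch in Python; the helper name is illustrative) shows that the smallest n for which k^n strictly exceeds 10^93 is one above each tabulated value (309 rather than 308 for k = 2, the same off-by-one flagged in the note at reference [4]):

    import math

    # Smallest n such that k^n > 10^93, found exactly with big integers.
    LIMIT = 10 ** 93

    def min_transcomputational_n(k: int) -> int:
        n = math.floor(93 / math.log10(k))  # here k^n <= 10^93 still holds
        while k ** n <= LIMIT:
            n += 1
        return n

    for k in range(2, 11):
        print(k, min_transcomputational_n(k))
    # k = 2..10 gives n = 309, 195, 155, 134, 120, 111, 103, 98, 94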

Implications

The existence of real-world transcomputational problems implies the limitations of computers as data processing tools. This point is best summarized in Bremermann's own words: [2]

"The experiences of various groups who work on problem solving, theorem proving and pattern recognition all seem to point in the same direction: These problems are tough. There does not seem to be a royal road or a simple method which at one stroke will solve all our problems. My discussion of ultimate limitations on the speed and amount of data processing may be summarized like this: Problems involving vast numbers of possibilities will not be solved by sheer data processing quantity. We must look for quality, for refinements, for tricks, for every ingenuity that we can think of. Computers faster than those of today will be a great help. We will need them. However, when we are concerned with problems in principle, present day computers are about as fast as they ever will be.
"We may expect that the technology of data processing will proceed step by step – just as ordinary technology has done. There is an unlimited challenge for ingenuity applied to specific problems. There is also an unending need for general notions and theories to organize the myriad details."

See also

Bremermann's limit
Limits of computation

References

  1. Klir, George J. (1991). Facets of Systems Science. Springer. pp. 121–128. ISBN 978-0-306-43959-9.
  2. Bremermann, H.J. (1962). "Optimization through evolution and recombination". In: Yovits, M.C. et al. (eds.), Self-Organizing Systems 1962. Spartan Books, Washington, D.C. pp. 93–106.
  3. Mühlenbein, Heinz. "Algorithms, data and hypotheses: Learning in open worlds" (PDF). German National Research Center for Computer Science. Retrieved 3 May 2011.
  4. Miles, William. "Bremermann's Limit". Retrieved 1 May 2011. While the source uses 308 as the number of inputs, this number is based on an error: 2^308 < 10^93.