In complexity theory and computability theory, an oracle machine is an abstract machine used to study decision problems. It can be visualized as a Turing machine with a black box, called an oracle, which is able to solve certain problems in a single operation. The problem can be of any complexity class. Even undecidable problems, such as the halting problem, can be used.
An oracle machine can be conceived as a Turing machine connected to an oracle. The oracle, in this context, is an entity capable of solving some problem, which for example may be a decision problem or a function problem. The problem does not have to be computable; the oracle is not assumed to be a Turing machine or computer program. The oracle is simply a "black box" that is able to produce a solution for any instance of a given computational problem: a decision problem is represented as a set A of natural numbers (or strings), an instance is an arbitrary natural number (or string), and the solution is "yes" if the number is in the set and "no" otherwise; a function problem is represented by a function f from natural numbers (or strings) to natural numbers (or strings), an instance is an input x, and the solution is the value f(x).
An oracle machine can perform all of the usual operations of a Turing machine, and can also query the oracle to obtain a solution to any instance of the computational problem for that oracle. For example, if the problem is a decision problem for a set A of natural numbers, the oracle machine supplies the oracle with a natural number, and the oracle responds with "yes" or "no" stating whether that number is an element of A.
There are many equivalent definitions of oracle Turing machines, as discussed below. The one presented here is from van Melkebeek (2003, p. 43).
An oracle machine, like a Turing machine, includes a work tape (a sequence of cells, each holding a blank or a symbol from the tape alphabet), a read/write head that can read the current cell, write a new symbol, and move left or right, and a finite control that determines the machine's actions from its current state and the symbol being read.
In addition to these components, an oracle machine also includes an oracle tape, which is separate from the work tape and may use its own alphabet, an oracle head that moves along the oracle tape reading and writing symbols, and two special states: the ASK state and the RESPONSE state.
From time to time, the oracle machine may enter the ASK state. When this happens, the following actions are performed in a single computational step: the contents of the oracle tape are viewed as an instance of the oracle's computational problem; the oracle is consulted, and the contents of the oracle tape are replaced with the solution to that instance; the oracle head is moved to the first cell of the oracle tape; and the state of the machine changes to RESPONSE.
The effect of changing to the ASK state is thus to receive, in a single step, a solution to the problem instance that is written on the oracle tape.
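As a loose illustration of this query mechanism, the following Python sketch treats the oracle as an opaque callable that, in one step, replaces the contents of an oracle tape with an answer. It is a minimal sketch, not the formal model above; the class OracleMachine and its method names are invented for the example.

```python
from typing import Callable, List

class OracleMachine:
    """Minimal sketch of the oracle-tape mechanism (illustrative only)."""

    def __init__(self, oracle: Callable[[str], str]):
        # The oracle is a black box: any callable mapping a problem
        # instance (written on the oracle tape) to a solution.
        self.oracle = oracle
        self.oracle_tape: List[str] = []

    def write_query(self, instance: str) -> None:
        # Ordinary computation steps may write symbols onto the oracle tape.
        self.oracle_tape = list(instance)

    def ask(self) -> str:
        # Entering the ASK state: in a single step the oracle tape contents
        # are interpreted as a problem instance, replaced by the oracle's
        # answer, and control passes to the RESPONSE state.
        answer = self.oracle("".join(self.oracle_tape))
        self.oracle_tape = list(answer)
        return answer

# Example: an oracle deciding membership in the set of even numbers.
even_oracle = lambda w: "yes" if int(w) % 2 == 0 else "no"
m = OracleMachine(even_oracle)
m.write_query("42")
print(m.ask())  # -> "yes"
```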
There are many alternative definitions to the one presented above. Many of these are specialized for the case where the oracle solves a decision problem. In this case, some definitions dispense with writing an answer on the oracle tape and instead have the machine enter one of two special states, YES or NO, according to whether the queried string belongs to the oracle set; some require the query to be written on the ordinary work tape rather than a separate oracle tape; and others give the machine access to a read-only tape holding the characteristic sequence of the oracle set.
These definitions are equivalent from the point of view of Turing computability: a function is oracle-computable from a given oracle under all of these definitions if it is oracle-computable under any of them. The definitions are not equivalent, however, from the point of view of computational complexity. A definition such as the one by van Melkebeek, using an oracle tape which may have its own alphabet, is required in general.
The complexity class of decision problems solvable by an algorithm in class A with an oracle for a language L is called A^L. For example, P^SAT is the class of problems solvable in polynomial time by a deterministic Turing machine with an oracle for the Boolean satisfiability problem. The notation A^B can be extended to a set of languages B (or a complexity class B) by the definition A^B = ⋃_{L ∈ B} A^L, the union of A^L over all languages L in B.
When a language L is complete for some class B, then A^L = A^B provided that machines in A can execute the reductions used in the completeness definition of class B. In particular, since SAT is NP-complete with respect to polynomial-time reductions, P^SAT = P^NP. However, if A = DLOGTIME, then A^SAT may not equal A^NP. (The definition of A^B given above is not completely standard. In some contexts, such as the proof of the time and space hierarchy theorems, it is more useful to assume that the abstract machine defining the class A only has access to a single oracle for one language. In this context, A^B is not defined if the complexity class B does not have any complete problems with respect to the reductions available to A.)
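To make the P^SAT notation concrete, here is a sketch of a polynomial-time procedure that uses a SAT decision oracle as a black box to recover a satisfying assignment by fixing variables one at a time (self-reducibility). The formula representation and the names simplify, find_assignment, and brute_force_sat are invented for the example; the brute-force stand-in merely plays the role of the oracle.

```python
from typing import Callable, Dict, List

# A CNF formula is a list of clauses; each clause is a list of signed
# variable indices (positive = variable, negative = its negation).
Clause = List[int]
CNF = List[Clause]

def simplify(cnf: CNF, var: int, value: bool) -> CNF:
    """Substitute a truth value for a variable and simplify the formula."""
    lit = var if value else -var
    out: CNF = []
    for clause in cnf:
        if lit in clause:
            continue                            # clause already satisfied
        out.append([l for l in clause if l != -lit])
    return out

def find_assignment(cnf: CNF, variables: List[int],
                    sat_oracle: Callable[[CNF], bool]) -> Dict[int, bool]:
    """Polynomial-time search using a black-box SAT decision oracle."""
    if not sat_oracle(cnf):
        raise ValueError("formula is unsatisfiable")
    assignment: Dict[int, bool] = {}
    for v in variables:
        # Ask the oracle whether setting v = True keeps the formula satisfiable.
        if sat_oracle(simplify(cnf, v, True)):
            assignment[v] = True
            cnf = simplify(cnf, v, True)
        else:
            assignment[v] = False
            cnf = simplify(cnf, v, False)
    return assignment

# A brute-force stand-in for the oracle (exponential time, demonstration only).
def brute_force_sat(cnf: CNF) -> bool:
    vars_ = sorted({abs(l) for c in cnf for l in c})
    def rec(f: CNF, rest: List[int]) -> bool:
        if [] in f:
            return False
        if not f:
            return True
        if not rest:
            return False
        v, tail = rest[0], rest[1:]
        return rec(simplify(f, v, True), tail) or rec(simplify(f, v, False), tail)
    return rec(cnf, vars_)

print(find_assignment([[1, 2], [-1, 2], [-2, 3]], [1, 2, 3], brute_force_sat))
```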
It is understood that NP ⊆ P^NP, but the question of whether NP^NP, P^NP, NP, and P are equal remains open. It is believed they are different, and this leads to the definition of the polynomial hierarchy.
Oracle machines are useful for investigating the relationship between the complexity classes P and NP, by considering the relationship between P^A and NP^A for an oracle A. In particular, it has been shown that there exist languages A and B such that P^A = NP^A and P^B ≠ NP^B.[4] The fact that the P = NP question relativizes both ways is taken as evidence that answering this question is difficult, because a proof technique that relativizes (i.e., one unaffected by the addition of an oracle) will not answer the P = NP question.[5] Most proof techniques relativize.[6]
One may consider the case where an oracle is chosen randomly from among all possible oracles (an infinite set). It has been shown that in this case, with probability 1, P^A ≠ NP^A.[7] When a question is true for almost all oracles, it is said to be true for a random oracle. This terminology is justified by the fact that random oracles support a statement with probability 0 or 1 only (this follows from Kolmogorov's zero–one law). This is only weak evidence that P ≠ NP, since a statement may be true for a random oracle but false for ordinary Turing machines; for example, IP^A ≠ PSPACE^A for a random oracle A, but IP = PSPACE.[8]
A machine with an oracle for the halting problem can determine whether particular Turing machines will halt on particular inputs, but it cannot determine, in general, whether machines equivalent to itself will halt. This creates a hierarchy of machines, each with a more powerful halting oracle and an even harder halting problem. This hierarchy of machines can be used to define the arithmetical hierarchy.[9]
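As a purely hypothetical sketch of how such an oracle would be consulted (no halting oracle can actually be implemented), the fragment below assumes an oracle halting_oracle(program_source, input_value) and uses a single query to settle a statement of the Goldbach type: the searcher program halts exactly when a counterexample exists. All names here are invented for the illustration.

```python
def halting_oracle(program_source: str, input_value: str) -> bool:
    """Hypothetical black box: True iff the given program halts on the
    given input. No algorithm can realize this oracle."""
    raise NotImplementedError

# A searcher that halts iff some even number >= 4 is NOT a sum of two
# primes. One oracle query decides the conjecture, even though the
# search itself might never terminate.
GOLDBACH_SEARCH = """
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

n = 4
while True:
    if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
        break          # counterexample found: halt
    n += 2
"""

def goldbach_is_false() -> bool:
    # A single query to the halting oracle decides this Pi_1 statement.
    return halting_oracle(GOLDBACH_SEARCH, "")
```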
In cryptography, oracles are used to make arguments for the security of cryptographic protocols where a hash function is used. A security reduction for the protocol is given in the case where, instead of a hash function, a random oracle answers each query randomly but consistently; the oracle is assumed to be available to all parties including the attacker, as the hash function is. Such a proof shows that unless the attacker solves the hard problem at the heart of the security reduction, they must make use of some interesting property of the hash function to break the protocol; they cannot treat the hash function as a black box (i.e., as a random oracle).
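In proofs and simulations a random oracle is commonly modelled by "lazy sampling": each fresh query is answered with fresh randomness, and repeated queries receive the same answer. A minimal sketch (the class name and the output length are choices made for the example):

```python
import os
from typing import Dict

class RandomOracle:
    """Lazily sampled random function: random but consistent answers."""

    def __init__(self, output_bytes: int = 32):
        self.output_bytes = output_bytes
        self.table: Dict[bytes, bytes] = {}

    def query(self, message: bytes) -> bytes:
        # The first query for a message draws a uniformly random answer;
        # later queries for the same message return the same answer.
        if message not in self.table:
            self.table[message] = os.urandom(self.output_bytes)
        return self.table[message]

oracle = RandomOracle()
assert oracle.query(b"hello") == oracle.query(b"hello")   # consistent
assert oracle.query(b"hello") != oracle.query(b"world")   # distinct with overwhelming probability
```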
In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances. BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine.
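A classic illustration of an efficient probabilistic algorithm is the Miller–Rabin primality test: on a composite input each round reports "probably prime" with probability at most 1/4, so a constant number of independent rounds pushes the error well below 1/3 (primality is in fact also known to be in P). A sketch:

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Probabilistic primality test with one-sided error: 'False' is always
    correct; 'True' is wrong with probability <= 4**-rounds for composite n."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False      # a witnesses that n is composite
    return True               # probably prime

print(miller_rabin(2_147_483_647))   # True: 2**31 - 1 is prime
print(miller_rabin(2_147_483_649))   # False: divisible by 3
```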
The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved.
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm.
In computational complexity theory, the complexity class #P (pronounced "sharp P" or, sometimes "number P" or "hash P") is the set of the counting problems associated with the decision problems in the set NP. More formally, #P is the class of function problems of the form "compute f(x)", where f is the number of accepting paths of a nondeterministic Turing machine running in polynomial time. Unlike most well-known complexity classes, it is not a class of decision problems but a class of function problems. The most difficult, representative problems of this class are #P-complete.
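The canonical #P-complete problem is #SAT, counting the satisfying assignments of a Boolean formula; the associated decision problem (SAT) asks only whether that count is nonzero. A brute-force sketch, exponential in the number of variables and shown for illustration only:

```python
from itertools import product
from typing import List

Clause = List[int]   # signed variable indices, as in DIMACS format

def count_sat(clauses: List[Clause], num_vars: int) -> int:
    """#SAT by exhaustive enumeration: the counting analogue of SAT."""
    count = 0
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x2): satisfied by x2 = True, x1 arbitrary.
print(count_sat([[1, 2], [-1, 2]], 2))   # -> 2
```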
In theoretical computer science and mathematics, the theory of computation is the branch that deals with which problems can be solved on a model of computation using an algorithm, how efficiently they can be solved, and to what degree (for example, approximate versus exact solutions). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".
A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm.
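A tiny simulator makes the "table of rules" concrete. The particular machine below is an assumption made for the example: it increments a binary number written on the tape, with the head starting on the leftmost bit.

```python
# Transition table: (state, symbol) -> (write, move, next_state).
# This machine increments a binary number: scan right to the end,
# then carry 1s into 0s moving left until a 0 or blank is found.
RULES = {
    ("right", "0"): ("0", +1, "right"),
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "halt"),
    ("carry", "_"): ("1", 0, "halt"),
}

def run_turing_machine(rules, tape, state="right", halt_state="halt"):
    tape = dict(enumerate(tape))     # sparse tape; missing cells are blank
    head = 0
    while state != halt_state:
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    cells = [tape.get(i, "_") for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

print(run_turing_machine(RULES, "1011"))   # -> "1100" (11 + 1 = 12)
```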
Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory.
Hypercomputation or super-Turing computation is a set of hypothetical models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that could correctly evaluate every statement in Peano arithmetic.
In computability theory and computational complexity theory, a many-one reduction is a reduction that converts instances of one decision problem into instances of another decision problem using a computable function. The reduced instance is in the second language if and only if the initial instance is in the first language. Thus, if we can decide whether instances are in the second language, we can decide whether instances are in the first language by applying the reduction and deciding the reduced instance. In this way, reductions can be used to measure the relative computational difficulty of two problems: it is said that a problem A reduces to a problem B if, in layman's terms, B is at least as hard to solve as A, since any algorithm that solves B can also be used as part of a program that solves A. An example is sketched below.
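A standard concrete example is the many-one reduction from VERTEX-COVER to INDEPENDENT-SET: a graph G with n vertices has a vertex cover of size k if and only if it has an independent set of size n − k, so mapping the instance (G, k) to (G, n − k) is a computable (indeed polynomial-time) reduction. The brute-force decider below is only a stand-in to show the reduction being used; all names are invented for the example.

```python
from itertools import combinations
from typing import Set, Tuple

Graph = Tuple[Set[int], Set[Tuple[int, int]]]   # (vertices, edges)

def reduce_vc_to_is(instance: Tuple[Graph, int]) -> Tuple[Graph, int]:
    """Many-one reduction: (G, k) is a yes-instance of VERTEX-COVER
    iff the returned (G, n - k) is a yes-instance of INDEPENDENT-SET."""
    (vertices, edges), k = instance
    return (vertices, edges), len(vertices) - k

def has_independent_set(graph: Graph, size: int) -> bool:
    """Brute-force decider for the target problem (exponential time)."""
    vertices, edges = graph
    return any(
        all((u, v) not in edges and (v, u) not in edges
            for u, v in combinations(subset, 2))
        for subset in combinations(sorted(vertices), size)
    )

# Deciding VERTEX-COVER via the reduction plus an INDEPENDENT-SET decider:
def has_vertex_cover(graph: Graph, k: int) -> bool:
    return has_independent_set(*reduce_vc_to_is((graph, k)))

triangle: Graph = ({1, 2, 3}, {(1, 2), (2, 3), (1, 3)})
print(has_vertex_cover(triangle, 2))   # True: {1, 2} covers every edge
print(has_vertex_cover(triangle, 1))   # False
```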
Computability is the ability to solve a problem in an effective manner. It is a key topic of the field of computability theory within mathematical logic and the theory of computation within computer science. The computability of a problem is closely linked to the existence of an algorithm to solve the problem.
In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.
In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. This appears to have been first demonstrated in Gurevich, Stockmeyer & Vishkin (1984). The first systematic work on parameterized complexity was done by Downey & Fellows (1999).
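The textbook example of a fixed-parameter tractable problem is VERTEX-COVER parameterized by the solution size k: the bounded search tree below runs in roughly O(2^k · m) time on a graph with m edges, so the exponential part depends only on the parameter and not on the size of the graph. A sketch:

```python
from typing import FrozenSet, Optional, Set, Tuple

Edge = Tuple[int, int]

def vertex_cover(edges: Set[Edge], k: int) -> Optional[FrozenSet[int]]:
    """Return a vertex cover of size <= k, or None if none exists.
    Branching on an uncovered edge gives a search tree of depth <= k."""
    uncovered = [e for e in edges]
    if not uncovered:
        return frozenset()
    if k == 0:
        return None
    u, v = uncovered[0]
    # Any cover must contain u or v; try both branches.
    for chosen in (u, v):
        rest = {e for e in edges if chosen not in e}
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {chosen}
    return None

edges = {(1, 2), (2, 3), (3, 4), (4, 1)}   # a 4-cycle
print(vertex_cover(edges, 2))   # a cover such as frozenset({1, 3})
print(vertex_cover(edges, 1))   # None
```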
In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A sufficiently efficient reduction from one problem to another may be used to show that the second problem is at least as difficult as the first.
In computational complexity theory, L is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of boolean flags, and many basic logspace algorithms use the memory in this way.
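A simple illustration of the "constant number of pointers" style of logspace algorithm is checking whether the read-only input is a palindrome: only two indices into the input are kept, each needing O(log n) bits of writable memory.

```python
def is_palindrome(read_only_input: str) -> bool:
    """Uses O(log n) writable memory: two indices into the input.
    The input itself is only read, never copied or modified."""
    left, right = 0, len(read_only_input) - 1
    while left < right:
        if read_only_input[left] != read_only_input[right]:
            return False
        left += 1
        right -= 1
    return True

print(is_palindrome("abcba"))   # True
print(is_palindrome("abca"))    # False
```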
In computability theory, a Turing reduction from a decision problem A to a decision problem B is an oracle machine that decides problem A given an oracle for B. It can be understood as an algorithm that could be used to solve A if it had available to it a subroutine for solving B. The concept can be analogously applied to function problems, as in the sketch below.
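As an example of a reduction that makes several oracle calls, the function problem "compute the maximum clique size of a graph" Turing-reduces to the decision problem "does the graph contain a clique of size at least k": binary search over k needs only O(log n) oracle queries. The oracle is passed in as a black-box callable, and the brute-force stand-in exists only to run the example.

```python
from itertools import combinations
from typing import Callable, Set, Tuple

Graph = Tuple[Set[int], Set[Tuple[int, int]]]

def max_clique_size(graph: Graph,
                    clique_oracle: Callable[[Graph, int], bool]) -> int:
    """Turing reduction: computes the optimum using O(log n) calls to a
    decision oracle for 'does the graph contain a clique of size >= k?'."""
    vertices, _ = graph
    lo, hi = 0, len(vertices)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if clique_oracle(graph, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Brute-force stand-in for the oracle (exponential time, demonstration only).
def clique_oracle(graph: Graph, k: int) -> bool:
    vertices, edges = graph
    adj = edges | {(v, u) for (u, v) in edges}
    return any(all((u, v) in adj for u, v in combinations(s, 2))
               for s in combinations(sorted(vertices), k))

g: Graph = ({1, 2, 3, 4}, {(1, 2), (2, 3), (1, 3), (3, 4)})
print(max_clique_size(g, clique_oracle))   # -> 3 (the triangle {1, 2, 3})
```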
In computational complexity theory, P/poly is a complexity class representing problems that can be solved by small circuits. More precisely, it is the set of formal languages that have polynomial-size circuit families. It can also be defined equivalently in terms of Turing machines with advice, extra information supplied to the Turing machine along with its input, that may depend on the input length but not on the input itself. In this formulation, P/poly is the class of decision problems that can be solved by a polynomial-time Turing machine with advice strings of length polynomial in the input size. These two different definitions make P/poly central to circuit complexity and non-uniform complexity.
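A small sketch of the advice formulation: every unary language (a subset of the strings 1, 11, 111, ...) lies in P/poly, because the advice for input length n can be a single bit recording whether the length-n string belongs to the language, regardless of whether those bits are computable. The names and the particular advice table below are invented for the example.

```python
from typing import Callable, Dict

def decide_with_advice(input_string: str,
                       advice: Dict[int, str],
                       machine: Callable[[str, str], bool]) -> bool:
    """A Turing machine 'with advice': along with its input it receives
    a string that depends only on the input length."""
    return machine(input_string, advice[len(input_string)])

# One advice bit per input length fixes membership in a unary language,
# even if no algorithm could compute these bits.
advice_table = {0: "0", 1: "1", 2: "0", 3: "1", 4: "1"}

def unary_machine(x: str, advice_bit: str) -> bool:
    # Polynomial-time check: the input must be all 1's; then the advice
    # bit alone determines membership.
    return set(x) <= {"1"} and advice_bit == "1"

print(decide_with_advice("111", advice_table, unary_machine))   # True
print(decide_with_advice("11", advice_table, unary_machine))    # False
```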
In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs. The problem comes up often in discussions of computability since it demonstrates that some functions are mathematically definable but not computable.
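The standard proof of undecidability is a diagonal argument: if a total decider halts(program, argument) existed, the program below would halt on its own description exactly when the decider says it does not. The sketch assumes a hypothetical halts function; nothing here can actually be implemented.

```python
def halts(program, argument) -> bool:
    """Hypothetical total decider for the halting problem (cannot exist)."""
    raise NotImplementedError

def paradox(program) -> None:
    # Do the opposite of what the supposed decider predicts.
    if halts(program, program):
        while True:
            pass              # predicted to halt -> loop forever
    return                    # predicted to loop -> halt immediately

# Feeding paradox to itself yields a contradiction: halts(paradox, paradox)
# can be neither True nor False, so no such total function can exist.
```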
Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical complexity classes.
Generic-case complexity is a subfield of computational complexity theory that studies the complexity of computational problems on "most inputs".