In cryptography, black-box obfuscation was a proposed cryptographic primitive which would allow a computer program to be obfuscated in such a way that it was impossible to determine anything about it except its input and output behavior. [1] Black-box obfuscation has been proven to be impossible, even in principle. [2]
Barak et al. constructed a family of unobfuscatable programs, for which an efficient attacker can always learn more from any obfuscated code than from black-box access. [2] [3]
Broadly, they start by engineering a special pair of programs that cannot be obfuscated together. For two randomly selected strings $\alpha, \beta$ of a fixed, pre-determined length $k$, define one program to be one that computes

$$C_{\alpha,\beta}(x) = \begin{cases} \beta & \text{if } x = \alpha \\ 0^k & \text{otherwise,} \end{cases}$$

and the other program to be one that computes

$$D_{\alpha,\beta}(X) = \begin{cases} 1 & \text{if } X(\alpha) = \beta \text{ within } \mathrm{poly}(k) \text{ steps} \\ 0 & \text{otherwise.} \end{cases}$$

(Here, $D_{\alpha,\beta}$ interprets its input $X$ as the code for a Turing machine. The second condition in the definition of $D_{\alpha,\beta}$, the bound on the number of simulation steps, is to prevent the function from being uncomputable.)
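The following Python sketch illustrates the shape of this pair of programs. It is not taken from Barak et al.'s paper: the names C, D, alpha, beta, and the parameter K are chosen here for exposition, and for brevity D calls a Python function directly rather than simulating Turing-machine code for a bounded number of steps.

```python
import secrets

K = 16  # length k of the secret strings, in bytes (illustrative choice)

# Randomly selected strings alpha and beta of fixed length k.
alpha = secrets.token_bytes(K)
beta = secrets.token_bytes(K)

def C(x: bytes) -> bytes:
    """C_{alpha,beta}: returns beta on input alpha and the all-zero string 0^k otherwise."""
    return beta if x == alpha else bytes(K)

def D(program) -> int:
    """D_{alpha,beta}: treats its input as a program and outputs 1 iff it maps alpha to beta.

    In the actual definition the input is the code of a Turing machine, which
    D simulates for at most poly(k) steps to keep D computable; this sketch
    substitutes a Python callable and omits the step bound.
    """
    try:
        return 1 if program(alpha) == beta else 0
    except Exception:
        return 0
```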
If an efficient attacker only has black-box access, Barak et al. argued, then the attacker has only an exponentially small chance of guessing the password $\alpha$, and so cannot distinguish the pair of programs $(C_{\alpha,\beta}, D_{\alpha,\beta})$ from the pair $(Z, D_{\alpha,\beta})$, where $Z$ is a program that always outputs $0^k$. However, if the attacker has access to any obfuscated implementations $C'$, $D'$ of $C_{\alpha,\beta}, D_{\alpha,\beta}$, then the attacker will find $D'(C') = 1$ with probability 1, whereas, given any obfuscated implementation $Z'$ of $Z$, the attacker will always find $D'(Z') = 0$ unless $\beta = 0^k$ (which should happen only with negligible probability). This means that the attacker can always distinguish the pair $(C_{\alpha,\beta}, D_{\alpha,\beta})$ from the pair $(Z, D_{\alpha,\beta})$ with obfuscated code access, but not with black-box access. Since no obfuscator can prevent this attack, Barak et al. conclude that no black-box obfuscator for pairs of programs exists. [2] [3]
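Continuing the sketch above, the distinguishing attack can be illustrated as follows. The names Z and attacker are again chosen for exposition, and the obfuscation step itself is omitted, since the attack works on any functionally equivalent code, obfuscated or not.

```python
def Z(x: bytes) -> bytes:
    """A program that ignores its input and always outputs the all-zero string 0^k."""
    return bytes(K)

def attacker(first_program, second_program) -> int:
    """Feed the code of the first program to the second one.

    For any (even obfuscated) implementations of (C, D) this yields 1 with
    probability 1; for (Z, D) it yields 1 only in the negligible event that
    beta happens to equal 0^k.  With black-box access only, the attacker
    would instead have to guess alpha to observe any difference.
    """
    return second_program(first_program)

assert attacker(C, D) == 1
assert attacker(Z, D) == 0  # fails only in the negligible case beta == 0^k
```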
To conclude the argument, Barak et al. define a third program $F_{\alpha,\beta}$ that combines the functionality of the previous two:

$$F_{\alpha,\beta}(b, x) = \begin{cases} C_{\alpha,\beta}(x) & \text{if } b = 0 \\ D_{\alpha,\beta}(x) & \text{if } b = 1. \end{cases}$$

Since equivalently efficient implementations of $C_{\alpha,\beta}$ and $D_{\alpha,\beta}$ can be recovered from any implementation of $F_{\alpha,\beta}$ by hardwiring the value of the bit $b$, the attack above applies to this single program as well, and Barak et al. conclude that $F_{\alpha,\beta}$ cannot be obfuscated either, which completes their argument. [2]
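A minimal continuation of the same sketch shows the combined program and how fixing the bit b recovers the two original programs; F, C_from_F, and D_from_F are illustrative names, not notation from the paper.

```python
def F(b: int, x):
    """F_{alpha,beta}: a single program combining C and D, selected by the bit b."""
    return C(x) if b == 0 else D(x)

# Hardwiring the bit b recovers implementations of the two original programs
# from any implementation of F (including an obfuscated one), so the attack
# on the pair (C, D) carries over to the single program F.
def C_from_F(x):
    return F(0, x)

def D_from_F(x):
    return F(1, x)

assert D_from_F(C_from_F) == 1  # the distinguishing attack still succeeds
```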
In their paper, Barak et al. also prove several extensions of this impossibility result, conditional on appropriate cryptographic assumptions. [2]
In their original paper exploring black-box obfuscation, Barak et al. defined two weaker notions of cryptographic obfuscation which they did not rule out: indistinguishability obfuscation and extractability obfuscation (which they called "differing-inputs obfuscation"). Informally, an indistinguishability obfuscator should convert input programs with the same functionality into output programs that a computationally bounded attacker cannot tell apart, and an extractability obfuscator is one for which, if an efficient attacker can distinguish the obfuscations of two programs, then the attacker can also produce an input on which the two programs produce different outputs. (Note that an extractability obfuscator is necessarily an indistinguishability obfuscator.) [2] [4]
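The indistinguishability requirement is commonly formalized as follows (a standard formulation, not quoted from Barak et al.): for every efficient distinguisher $A$, every security parameter $\lambda$, and every pair of equal-size circuits $C_0, C_1$ that compute the same function,

$$\left| \Pr\!\left[ A\!\left( i\mathcal{O}(C_0) \right) = 1 \right] - \Pr\!\left[ A\!\left( i\mathcal{O}(C_1) \right) = 1 \right] \right| \le \mathrm{negl}(\lambda),$$

where $\mathrm{negl}$ denotes a negligible function.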
As of 2020, a candidate implementation of indistinguishability obfuscation is under investigation. [5] In 2013, Boyle et al. explored several candidate implementations of extractability obfuscation. [4]