Solomonoff's theory of inductive inference

Solomonoff's theory of inductive inference is a mathematical theory of induction introduced by Ray Solomonoff, based on probability theory and theoretical computer science. [1] [2] [3] In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.

Solomonoff proved that this induction is incomputable, but noted that "this incomputability is of a very benign kind", and that it "in no way inhibits its use for practical prediction". [2]

Solomonoff's induction naturally formalizes Occam's razor [4] [5] [6] [7] [8] by assigning larger prior credences to theories that require a shorter algorithmic description.

Origin

Philosophical

The theory has philosophical foundations and was introduced by Ray Solomonoff around 1960. [9] It is a mathematically formalized combination of Occam's razor [4] [5] [6] [7] [8] and the Principle of Multiple Explanations. [10] All computable theories which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Marcus Hutter's universal artificial intelligence builds upon this to calculate the expected value of an action.
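
To make "more weight on the shorter theories" concrete, one standard formalization (a common choice, not the only possible one) assigns a theory $T$ whose shortest program has length $\ell(T)$ bits the prior weight

$$\mathbb{P}[T] \;\propto\; 2^{-\ell(T)},$$

so that every additional bit of description halves a theory's prior credence.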

Principle

Solomonoff's induction has been argued to be the computational formalization of pure Bayesianism. [3] To understand it, recall that Bayesianism derives the posterior probability $\mathbb{P}[T|D]$ of a theory $T$ given data $D$ by applying Bayes' rule, which yields

$$\mathbb{P}[T|D] \;=\; \frac{\mathbb{P}[D|T]\,\mathbb{P}[T]}{\mathbb{P}[D|T]\,\mathbb{P}[T] + \sum_{A} \mathbb{P}[D|A]\,\mathbb{P}[A]},$$

where the theories $A$ are the alternatives to theory $T$. For this equation to make sense, the quantities $\mathbb{P}[D|T]$ and $\mathbb{P}[D|A]$ must be well-defined for all theories $T$ and $A$. In other words, any theory must define a probability distribution over observable data $D$. Solomonoff's induction essentially boils down to demanding that all such probability distributions be computable.
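
As a minimal illustration of this Bayes-rule computation (the two toy theories, their description lengths, and the length-based prior below are illustrative assumptions, not Solomonoff's actual construction):

```python
# Toy Bayesian posterior over a finite stand-in for the set of computable
# theories. Each theory defines a probability distribution over bit strings.

def likelihood_uniform(data: str) -> float:
    """'Random coin' theory: every bit string of length n has probability 2^-n."""
    return 0.5 ** len(data)

def likelihood_all_ones(data: str) -> float:
    """'Always 1' theory: probability 1 for all-ones strings, 0 otherwise."""
    return 1.0 if all(bit == "1" for bit in data) else 0.0

# (name, description length in bits, likelihood function P[D|T])
THEORIES = [
    ("uniform", 5, likelihood_uniform),
    ("all_ones", 3, likelihood_all_ones),
]

def posterior(data: str) -> dict:
    """Bayes' rule with the length-weighted prior P[T] proportional to 2^-l(T)."""
    joint = {name: (2.0 ** -length) * lik(data) for name, length, lik in THEORIES}
    total = sum(joint.values())
    return {name: weight / total for name, weight in joint.items()}

print(posterior("1111"))  # 'all_ones' dominates: it explains the data and is shorter
```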

Interestingly, the set of computable probability distributions can be identified with a subset of the set of all programs, and is therefore countable. Similarly, the sets of observable data considered by Solomonoff were finite. Without loss of generality, we can thus consider that any observable data is a finite bit string. As a result, Solomonoff's induction can be defined by only invoking discrete probability distributions.

Solomonoff's induction then allows one to make probabilistic predictions of future data $F$ by simply obeying the laws of probability. Namely, we have

$$\mathbb{P}[F|D] \;=\; \sum_{T} \mathbb{P}[F|T,D]\,\mathbb{P}[T|D].$$

This quantity can be interpreted as the average of the predictions $\mathbb{P}[F|T,D]$ of all theories $T$ given past data $D$, weighted by their posterior credences $\mathbb{P}[T|D]$.
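
Continuing the toy sketch above (the next-bit probabilities $\mathbb{P}[F|T,D]$ assigned to the two memoryless toy theories are again illustrative assumptions):

```python
# Posterior-weighted prediction P[F|D] = sum over T of P[F|T,D] * P[T|D],
# reusing posterior() and the toy theories defined in the previous sketch.

NEXT_BIT_ONE = {"uniform": 0.5, "all_ones": 1.0}  # P[next bit = 1 | T]

def predict_next_one(data: str) -> float:
    post = posterior(data)
    return sum(NEXT_BIT_ONE[name] * weight for name, weight in post.items())

print(predict_next_one("1111"))  # close to 1: the short all-ones theory dominates
```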

Mathematical

The proof of the "razor" is based on the known mathematical properties of a probability distribution over a countable set. These properties are relevant because the infinite set of all programs is a denumerable set. The sum S of the probabilities of all programs must be exactly equal to one (as per the definition of probability), thus the probabilities must roughly decrease as we enumerate the infinite set of all programs, otherwise S will be strictly greater than one. To be more precise, for every $\varepsilon > 0$, there is some length $l$ such that the probability of all programs longer than $l$ is at most $\varepsilon$. This does not, however, preclude very long programs from having very high probability.
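
As a concrete instance of this tail argument, consider the illustrative prior that gives each of the at most $2^{\ell}$ binary programs of length $\ell$ the weight $c\,2^{-2\ell}$, where the constant $c$ normalizes the sum to one. Then

$$\sum_{\ell > l} \;\sum_{\ell(p) = \ell} c\,2^{-2\ell} \;\le\; \sum_{\ell > l} c\,2^{-\ell} \;=\; c\,2^{-l} \;\le\; \varepsilon \qquad \text{whenever } l \ge \log_2(c/\varepsilon),$$

so the total probability of all programs longer than $l$ can indeed be made smaller than any $\varepsilon > 0$.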

Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity. The universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.
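
In one standard notation (a universal monotone machine $U$, with programs weighted by two to the minus their length; this is one of several essentially equivalent formalizations), the universal prior of a prefix $p$ reads

$$M(p) \;=\; \sum_{q \,:\, U(q) = p\ast} 2^{-\ell(q)},$$

where the sum ranges over all programs $q$ whose output begins with $p$, and $\ell(q)$ denotes the length of $q$ in bits.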

Mathematical guarantees

Solomonoff's completeness

The remarkable property of Solomonoff's induction is its completeness. In essence, the completeness theorem guarantees that the expected cumulative errors made by the predictions based on Solomonoff's induction are upper-bounded by the Kolmogorov complexity of the (stochastic) data generating process. The errors can be measured using the Kullback–Leibler divergence or the square of the difference between the induction's prediction and the probability assigned by the (stochastic) data generating process.
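
One common way of stating the completeness theorem (the notation here is assumed for exposition: $\mu$ is the computable data-generating measure, $M$ the universal prior, $x_{<t}$ the first $t-1$ bits, and $K(\mu)$ the Kolmogorov complexity of $\mu$) bounds the total expected squared prediction error:

$$\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[\big(M(x_t = 1 \mid x_{<t}) - \mu(x_t = 1 \mid x_{<t})\big)^{2}\right] \;\le\; \frac{\ln 2}{2}\, K(\mu).$$

Since the right-hand side is finite, the expected squared error must vanish as $t$ grows.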

Solomonoff's uncomputability

Unfortunately, Solomonoff also proved that Solomonoff's induction is uncomputable. In fact, he showed that computability and completeness are mutually exclusive: any complete theory must be uncomputable. The proof of this is derived from a game between the induction and the environment. Essentially, any computable induction can be tricked by a computable environment, by choosing the computable environment that negates the computable induction's prediction. This fact can be regarded as an instance of the no free lunch theorem.
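
A minimal sketch of this diagonalization game (the predictor below is a placeholder standing in for an arbitrary computable prediction rule):

```python
# Sketch of the adversarial game: any computable predictor can be driven to
# 100% error by the computable environment that always emits the opposite bit.

def predict(history: str) -> int:
    """Any computable prediction rule; here, a simple majority vote."""
    return 1 if history.count("1") * 2 >= len(history) else 0

def adversarial_environment(predict, steps: int) -> str:
    """The environment that simulates the predictor and negates its output."""
    history = ""
    for _ in range(steps):
        history += str(1 - predict(history))  # the bit the predictor did not expect
    return history

print(adversarial_environment(predict, 8))  # the predictor is wrong at every step
```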

Modern applications

Artificial intelligence

Though Solomonoff's inductive inference is not computable, several AIXI-derived algorithms approximate it in order to make it run on a modern computer. The more computing power they are given, the closer their predictions are to the predictions of inductive inference (their mathematical limit is Solomonoff's inductive inference). [11] [12] [13]

Another direction of inductive inference is based on E. Mark Gold's 1967 model of learning in the limit, which has since been extended to many further models of learning. [14] The general scenario is the following: given a class S of computable functions, is there a learner (that is, a recursive functional) which, for any input of the form (f(0), f(1), ..., f(n)), outputs a hypothesis (an index e with respect to a previously agreed-on acceptable numbering of all computable functions; the indexed function may be required to be consistent with the given values of f)? A learner M learns a function f if almost all its hypotheses are the same index e, which generates the function f; M learns S if M learns every f in S. Basic results are that all recursively enumerable classes of functions are learnable, while the class REC of all computable functions is not. Many related models have been considered, and the learning of classes of recursively enumerable sets from positive data has been studied since Gold's pioneering paper of 1967. A far-reaching extension of Gold's approach is Schmidhuber's theory of generalized Kolmogorov complexities, [15] which are kinds of super-recursive algorithms.
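
A minimal sketch of Gold-style identification in the limit by enumeration (the finite function class and its numbering below are illustrative stand-ins for a recursively enumerable class):

```python
# Identification by enumeration: output the first index consistent with the
# data seen so far. The class S = {x -> (a*x + b) mod 10 : 0 <= a, b < 10}
# and its numbering are illustrative; Gold's results concern general
# computable numberings.

CLASS = [(a, b) for a in range(10) for b in range(10)]

def hypothesis(index: int, x: int) -> int:
    a, b = CLASS[index]
    return (a * x + b) % 10

def learner(values: list) -> int:
    """Return the first index consistent with the observed f(0), ..., f(n)."""
    for e in range(len(CLASS)):
        if all(hypothesis(e, x) == v for x, v in enumerate(values)):
            return e
    return -1  # no consistent hypothesis in the class

# Target f(x) = (3x + 7) mod 10; the learner's guesses converge to its index.
f = lambda x: (3 * x + 7) % 10
for n in range(1, 5):
    print(learner([f(x) for x in range(n)]))  # stabilizes after enough data
```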

Related Research Articles

In algorithmic information theory, the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963; the notion is a generalization of classical information theory.

In the computer science subfield of algorithmic information theory, a Chaitin constant or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin.

In philosophy, Occam's razor is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements. It is also known as the principle of parsimony or the law of parsimony. Attributed to William of Ockham, a 14th-century English philosopher and theologian, it is frequently cited as Entia non sunt multiplicanda praeter necessitatem, which translates as "Entities must not be multiplied beyond necessity", although Occam never used these exact words. Popularly, the principle is sometimes paraphrased as "The simplest explanation is usually the best one."

Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Fundamentally, Bayesian inference uses prior knowledge, in the form of a prior distribution, in order to estimate posterior probabilities. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".

Inductive logic programming (ILP) is a subfield of symbolic artificial intelligence which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. The term "inductive" here refers to philosophical rather than mathematical induction. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples.

Minimum message length (MML) is a Bayesian information-theoretic method for statistical model comparison and selection. It provides a formal information theory restatement of Occam's Razor: even when models are equal in their measure of fit-accuracy to the observed data, the one generating the most concise explanation of data is more likely to be correct. MML was invented by Chris Wallace, first appearing in the seminal paper "An information measure for classification". MML is intended not just as a theoretical construct, but as a technique that may be deployed in practice. It differs from the related concept of Kolmogorov complexity in that it does not require use of a Turing-complete language to model data.

Minimum Description Length (MDL) is a model selection principle where the shortest description of the data is the best model. MDL methods learn through a data compression perspective and are sometimes described as mathematical applications of Occam's razor. The MDL principle can be extended to other forms of inductive inference and learning, for example to estimation and sequential prediction, without explicitly identifying a single model of the data.

In computer science, computational learning theory is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms.

Ray Solomonoff was an American mathematician who invented algorithmic probability and his General Theory of Inductive Inference, and who was a founder of algorithmic information theory. He was an originator of the branch of artificial intelligence based on machine learning, prediction and probability. He circulated the first report on non-semantic machine learning in 1956.

In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s. It is used in inductive inference theory and analyses of algorithms. In his general theory of inductive inference, Solomonoff uses the method together with Bayes' rule to obtain probabilities of prediction for an algorithm's future outputs.

In mathematical folklore, the "no free lunch" (NFL) theorem of David Wolpert and William Macready alludes to the saying "no such thing as a free lunch", that is, there are no easy shortcuts to success. It appeared in the 1997 paper "No Free Lunch Theorems for Optimization". Wolpert had previously derived no free lunch theorems for machine learning.

In computational complexity and optimization, the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method. The name alludes to the saying "no such thing as a free lunch", that is, no method offers a "short cut". This is under the assumption that the search space is a probability density function. It does not apply to the case where the search space has underlying structure that can be exploited more efficiently than random search or even has closed-form solutions that can be determined without search at all. For such probabilistic assumptions, the outputs of all procedures solving a particular type of problem are statistically identical. A colourful way of describing such a circumstance, introduced by David Wolpert and William G. Macready in connection with the problems of search and optimization, is to say that there is no free lunch. Wolpert had previously derived no free lunch theorems for machine learning. Before Wolpert's article was published, Cullen Schaffer independently proved a restricted version of one of Wolpert's theorems and used it to critique the current state of machine learning research on the problem of induction.

Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."

Marcus Hutter is a professor and artificial intelligence researcher. As a Senior Scientist at DeepMind, he is researching the mathematical foundations of artificial general intelligence. He is on leave from his professorship at the ANU College of Engineering and Computer Science of the Australian National University in Canberra, Australia. Hutter studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber's group at the Istituto Dalle Molle di Studi sull'Intelligenza Artificiale in Manno, Switzerland. He developed a mathematical theory of artificial general intelligence. His book Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability was published by Springer in 2005.

Simplicity theory is a cognitive theory that seeks to explain the attractiveness of situations or events to human minds. It is based on work done by scientists like behavioural scientist Nick Chater, computer scientist Paul Vitanyi, psychologist Jacob Feldman, and artificial intelligence researchers Jean-Louis Dessalles and Jürgen Schmidhuber. It claims that interesting situations appear simpler than expected to the observer.

AIXI is a theoretical mathematical formalism for artificial general intelligence. It combines Solomonoff induction with sequential decision theory. AIXI was first proposed by Marcus Hutter in 2000 and several results regarding AIXI are proved in Hutter's 2005 book Universal Artificial Intelligence.

Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns. It is a source of knowledge about the world.

In computational learning theory, Occam learning is a model of algorithmic learning where the objective of the learner is to output a succinct representation of received training data. This is closely related to probably approximately correct (PAC) learning, where the learner is evaluated on its predictive power of a test set.

Universality probability is an abstruse probability measure in computational complexity theory that concerns universal Turing machines.

The outline of machine learning provides an overview of and topical guide to the field.

References

  1. Zenil, Hector (2011). Randomness Through Computation: Some Answers, More Questions. World Scientific. ISBN 978-981-4462-63-1.
  2. Solomonoff, Ray J. (2009). "Algorithmic Probability: Theory and Applications". In Emmert-Streib, Frank; Dehmer, Matthias (eds.), Information Theory and Statistical Learning. Boston, MA: Springer US. pp. 1–23. doi:10.1007/978-0-387-84816-7_1. ISBN 978-0-387-84816-7.
  3. Hoang, Lê Nguyên (2020). The Equation of Knowledge: From Bayes' Rule to a Unified Philosophy of Science (First ed.). Boca Raton, FL. ISBN 978-0-367-85530-7. OCLC 1162366056.
  4. McCall, J. J. (2004). "Induction: From Kolmogorov and Solomonoff to De Finetti and Back to Kolmogorov". Metroeconomica. Wiley.
  5. Stork, D. (2001). "Foundations of Occam's razor and parsimony in learning". NIPS 2001 Workshop.
  6. Soklakov, A. N. (2002). "Occam's razor as a formal basis for a physical theory". Foundations of Physics Letters. Springer.
  7. Hernandez-Orallo, Jose (1999). "Beyond the Turing Test" (PDF). Journal of Logic, Language and Information. 9.
  8. Hutter, M. (2003). "On the existence and convergence of computable universal priors". Algorithmic Learning Theory. Springer.
  9. Rathmanner, Samuel; Hutter, Marcus (2011). "A philosophical treatise of universal induction". Entropy. 13 (6): 1076–1136.
  10. Li, Ming; Vitanyi, Paul (2008). An Introduction to Kolmogorov Complexity and Its Applications. New York: Springer-Verlag. pp. 339 ff.
  11. Veness, J.; Ng, K. S.; Hutter, M.; Uther, W.; Silver, D. (2009). "A Monte Carlo AIXI Approximation". arXiv preprint.
  12. Veness, J.; Ng, K. S.; Hutter, M.; Silver, D. (2010). "Reinforcement Learning via AIXI Approximation". arXiv preprint.
  13. Pankov, S. (2008). "A computational approximation to the AIXI model". Artificial general intelligence, 2008: proceedings of ….
  14. Gold, E. Mark (1967). "Language identification in the limit" (PDF). Information and Control. 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5.
  15. Schmidhuber, J. (2002). "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit" (PDF). International Journal of Foundations of Computer Science. 13 (4): 587–612. doi:10.1142/S0129054102001291.
