Impossibility of a gambling system

[Figure] A random walk on a cubic three-dimensional lattice.

The principle of the impossibility of a gambling system is a concept in probability theory. It states that in a random sequence, the methodical selection of subsequences does not change the probability of specific elements. The first mathematical demonstration is attributed to Richard von Mises (who used the term collective rather than sequence).[1][2]

The principle states that no method for forming a subsequence of a random sequence (the gambling system) improves the odds for a specific event. For instance, a sequence of fair coin tosses produces equal and independent 50/50 chances for heads and tails. A simple system of betting on heads every 3rd, 7th, or 21st toss, etc., does not change the odds of winning in the long run. As a mathematical consequence of computability theory, more complicated betting strategies (such as a martingale) also cannot alter the odds in the long run.
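
The claim can be checked empirically. The following sketch (illustrative only; the number of tosses and the 3rd/7th/21st selection pattern are arbitrary choices) simulates fair coin tosses and compares the heads frequency over all tosses with the frequency over the tosses such a simple betting system would pick:

```python
import random

random.seed(0)
N = 1_000_000
tosses = [random.randint(0, 1) for _ in range(N)]  # 1 = heads, 0 = tails

# The "system": bet on heads only on every 3rd, 7th, and 21st toss.
selected = [tosses[i] for i in range(N)
            if (i + 1) % 3 == 0 or (i + 1) % 7 == 0 or (i + 1) % 21 == 0]

print(f"heads frequency, all tosses:      {sum(tosses) / len(tosses):.4f}")
print(f"heads frequency, selected tosses: {sum(selected) / len(selected):.4f}")
# Both frequencies come out near 0.5: the selection does not improve the odds.
```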

Von Mises' mathematical demonstration defines an infinite sequence of zeros and ones as a random sequence if it is not biased, that is, if it has the frequency stability property: the frequency of zeros in the sequence stabilizes at 1/2, and every possible subsequence selected by any systematic method is likewise not biased.[3]
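
A minimal sketch of the frequency stability property, using Python's pseudorandom generator as a stand-in for a genuinely random source (an assumption for illustration; a pseudorandom sequence is not a von Mises collective): the frequency of zeros over growing prefixes settles near 1/2.

```python
import random

random.seed(1)
bits = [random.randint(0, 1) for _ in range(100_000)]

# Frequency of zeros over growing prefixes: it settles near 1/2.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"first {n:>7} bits: frequency of 0 = {bits[:n].count(0) / n:.3f}")
```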

The subsequence selection criterion is important: although the sequence 0101010101... is not biased, selecting only the odd positions yields 000000..., which is not random. Von Mises did not fully define what constituted a "proper" selection rule for subsequences, but in 1940 Alonzo Church defined it as any recursive function which, having read the first N elements of the sequence, decides whether to select element number N+1. Church was a pioneer in the field of computable functions, and his definition relied on the Church–Turing thesis for computability.[4][5][6]
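
A rough sketch of a Church-style selection rule, with hypothetical helper names: the rule sees only the first N elements and then decides whether element number N+1 enters the subsequence. Applied to the alternating sequence, the computable rule "take the odd-numbered positions" extracts a constant subsequence, which is why 0101... fails the criterion.

```python
def select_subsequence(sequence, rule):
    """Church-style selection: `rule(prefix)` sees the first N elements and
    returns True if element number N+1 should be added to the subsequence."""
    chosen = []
    for n in range(len(sequence)):
        if rule(sequence[:n]):          # decision is made before element n is seen
            chosen.append(sequence[n])
    return chosen

# The alternating sequence 0101... has zeros with frequency 1/2 overall...
alternating = [0, 1] * 10

# ...but the computable rule "select the odd-numbered positions" extracts
# 000000..., so the alternating sequence fails von Mises' criterion.
odd_positions = lambda prefix: len(prefix) % 2 == 0
print(select_subsequence(alternating, odd_positions))  # [0, 0, 0, ...]
```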

In the mid-1960s, A. N. Kolmogorov and D. W. Loveland independently proposed a more permissive selection rule.[7][8] In their view, Church's recursive-function definition was too restrictive because it read the elements in order. Instead they proposed a rule based on a partially computable process which, having read any N elements of the sequence, decides whether to select another element that has not been read yet.
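
A sketch of the non-monotonic (Kolmogorov–Loveland style) idea, with hypothetical names and a toy strategy: the process may inspect positions in any order, and on each move it commits to selecting or merely observing a position before reading its value.

```python
def kl_select(sequence, strategy):
    """Non-monotonic (Kolmogorov-Loveland style) selection sketch.

    `strategy(observed)` receives a dict {position: value} of bits read so
    far and returns (next_position, select_flag): which unread position to
    inspect next and whether its bit joins the subsequence. The flag is
    committed before the bit at that position is revealed. None stops."""
    observed, chosen = {}, []
    while True:
        move = strategy(dict(observed))
        if move is None:
            break
        pos, select = move
        if pos in observed or not 0 <= pos < len(sequence):
            break  # illegal move: position already read, or out of range
        observed[pos] = sequence[pos]
        if select:
            chosen.append(sequence[pos])
    return chosen

# Toy strategy (hypothetical, sequence length 16 hard-coded): read the bits
# back to front, selecting every bit it inspects.
def back_to_front(observed):
    unread = [i for i in range(16) if i not in observed]
    return (max(unread), True) if unread else None

print(kl_select([0, 1] * 8, back_to_front))
```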

The principle influenced modern concepts of randomness, e.g. A. N. Kolmogorov's proposal to consider a finite sequence random (with respect to a class of computing systems) if any program that can generate the sequence is at least as long as the sequence itself.[9][10]
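
Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude upper-bound proxy that illustrates the idea (an illustrative analogy, not Kolmogorov's definition): a highly patterned sequence has a short description, while typical random bytes do not.

```python
import os
import zlib

# Kolmogorov complexity is uncomputable; compressed size is only a crude
# upper-bound proxy, used here purely to illustrate the idea.
patterned = b"01" * 5_000      # generated by a very short program
typical = os.urandom(10_000)   # typical random bytes: essentially incompressible

print(len(patterned), "->", len(zlib.compress(patterned)), "bytes after compression")
print(len(typical), "->", len(zlib.compress(typical)), "bytes after compression")
```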

Related Research Articles

Kolmogorov complexity (measure of algorithmic complexity)

In algorithmic information theory, the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963; it is a generalization of classical information theory.

In the computer science subfield of algorithmic information theory, a Chaitin constant or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin.

In computability theory, the Church–Turing thesis is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after the American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability.

Frequentist probability (interpretation of probability)

Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in many trials. Probabilities can be found by a repeatable objective process. The continued use of frequentist methods in scientific inference, however, has been called into question.

The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical, tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of probability theory.

The concept of a random sequence is essential in probability theory and statistics. The concept generally relies on the notion of a sequence of random variables and many statistical discussions begin with the words "let X1,...,Xn be independent random variables...". Yet as D. H. Lehmer stated in 1951: "A random sequence is a vague notion... in which each term is unpredictable to the uninitiated and whose digits pass a certain number of tests traditional with statisticians".

Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory.

Minimum Description Length (MDL) is a model selection principle where the shortest description of the data is the best model. MDL methods learn through a data compression perspective and are sometimes described as mathematical applications of Occam's razor. The MDL principle can be extended to other forms of inductive inference and learning, for example to estimation and sequential prediction, without explicitly identifying a single model of the data.

Solomonoff's theory of inductive inference is a mathematical proof that if a universe is generated by an algorithm, then observations of that universe, encoded as a dataset, are best predicted by the smallest executable archive of that dataset. This formalization of Occam's razor for induction was introduced by Ray Solomonoff, based on probability theory and theoretical computer science. In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.

Richard von Mises (Austrian physicist and mathematician)

Richard Edler von Mises was an Austrian scientist and mathematician who worked on solid mechanics, fluid mechanics, aerodynamics, aeronautics, statistics and probability theory. He held the position of Gordon McKay Professor of Aerodynamics and Applied Mathematics at Harvard University. He described his work in his own words shortly before his death as being on

practical analysis, integral and differential equations, mechanics, hydrodynamics and aerodynamics, constructive geometry, probability calculus, statistics and philosophy.

Per Martin-Löf (Swedish logician, philosopher, and mathematical statistician)

Per Erik Rutger Martin-Löf is a Swedish logician, philosopher, and mathematical statistician. He is internationally renowned for his work on the foundations of probability, statistics, mathematical logic, and computer science. Since the late 1970s, Martin-Löf's publications have been mainly in logic. In philosophical logic, Martin-Löf has wrestled with the philosophy of logical consequence and judgment, partly inspired by the work of Brentano, Frege, and Husserl. In mathematical logic, Martin-Löf has been active in developing intuitionistic type theory as a constructive foundation of mathematics; Martin-Löf's work on type theory has influenced computer science.

Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."

Intuitively, an algorithmically random sequence is a sequence of binary digits that appears random to any algorithm running on a universal Turing machine. The notion can be applied analogously to sequences on any finite alphabet. Random sequences are key objects of study in algorithmic information theory.

In computer science, the longest increasing subsequence problem aims to find a subsequence of a given sequence in which the subsequence's elements are sorted in an ascending order and in which the subsequence is as long as possible. This subsequence is not necessarily contiguous or unique. The longest increasing subsequences are studied in the context of various disciplines related to mathematics, including algorithmics, random matrix theory, representation theory, and physics. The longest increasing subsequence problem is solvable in time O(n log n), where n denotes the length of the input sequence.
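
A brief illustrative sketch of the standard O(n log n) length computation (a patience-sorting style method), assuming a strictly increasing subsequence is wanted; the function name is hypothetical:

```python
from bisect import bisect_left

def lis_length(seq):
    """Length of a longest strictly increasing subsequence in O(n log n)."""
    tails = []  # tails[k] = smallest possible tail of an increasing subsequence of length k + 1
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4, e.g. 1, 4, 5, 9
```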

Algorithm characterizations are attempts to formalize the word algorithm. Algorithm does not have a generally accepted formal definition. Researchers are actively working on this problem. This article will present some of the "characterizations" of the notion of "algorithm" in more detail.

Randomness (apparent lack of pattern or predictability in events)

In common usage, randomness is the apparent or actual lack of pattern or predictability in information. A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if the probability distribution is known, the frequency of different outcomes over repeated events is predictable. For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4. In this view, randomness is not haphazardness; it is a measure of uncertainty of an outcome. Randomness applies to concepts of chance, probability, and information entropy.
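
The dice claim follows from counting: of the 36 equally likely ordered pairs, 6 sum to 7 while only 3 sum to 4. A short check (illustrative only):

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=2))   # 36 equally likely ordered pairs
print(sum(a + b == 7 for a, b in rolls))       # 6 ways to make 7
print(sum(a + b == 4 for a, b in rolls))       # 3 ways to make 4
```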

History of randomness

In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. At the same time, most ancient cultures used various methods of divination to attempt to circumvent randomness and fate. Beyond religion and games of chance, randomness has been attested for sortition since at least ancient Athenian democracy in the form of a kleroterion.

In combinatorial mathematics, probability, and computer science, in the longest alternating subsequence problem, one wants to find a subsequence of a given sequence in which the elements are in alternating order, and in which the sequence is as long as possible.

References

  1. Richard von Mises, Probability, Statistics and Truth, 1928/1981, Dover, ISBN 0-486-24214-5, page 25
  2. William Stanley Peters, Counting for Something: Statistical Principles and Personalities, 1986, ISBN 0-387-96364-2, page 3
  3. Laurent Bienvenu, "Kolmogorov–Loveland Stochasticity", in STACS 2007: 24th Annual Symposium on Theoretical Aspects of Computer Science, ed. Wolfgang Thomas, ISBN 3-540-70917-7, page 260
  4. Alonzo Church, "On the Concept of Random Sequence", Bull. Amer. Math. Soc., 46 (1940), 254–260
  5. Ivor Grattan-Guinness (ed.), Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences, Volume 2, ISBN 0-8018-7397-5, page 1412
  6. J. Alberto Coffa, "Randomness and Knowledge", in PSA 1972: Proceedings of the 1972 Biennial Meeting of the Philosophy of Science Association, Volume 20, Springer, 1974, ISBN 90-277-0408-2, page 106
  7. A. N. Kolmogorov, "Three approaches to the quantitative definition of information", Problems of Information Transmission, 1(1):1-7, 1965
  8. D. W. Loveland, "A new interpretation of von Mises' concept of random sequence", Z. Math. Logik Grundlagen Math., 12 (1966), 279-294
  9. Ian Hacking, An Introduction to Probability and Inductive Logic, 2001, ISBN 0-521-77501-9, page 145
  10. Jan von Plato, Creating Modern Probability, 1998, ISBN 0-521-59735-8, pages 23-24