Propensity probability

The propensity theory of probability is a probability interpretation in which the probability is thought of as a physical propensity, disposition, or tendency of a given type of situation to yield an outcome of a certain kind, or to yield a long-run relative frequency of such an outcome. [1]

Propensities are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate a given outcome type at a persistent rate. Stable long-run frequencies are a manifestation of invariant single-case probabilities. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives. These single-case probabilities are known as propensities or chances.

In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics, such as the probability of decay of a particular atom at a particular moment.

History

A propensity theory of probability was given by Charles Sanders Peirce. [2] [3] [4] [5]

Karl Popper

A later propensity theory was proposed [6] by philosopher Karl Popper, who, however, had only slight acquaintance with the writings of Charles S. Peirce. [2] [3] Popper noted that the outcome of a physical experiment is produced by a certain set of "generating conditions". When we repeat an experiment, as the saying goes, we really perform another experiment with a (more or less) similar set of generating conditions. To say that a set of generating conditions G has propensity p of producing the outcome E means that those exact conditions, if repeated indefinitely, would produce an outcome sequence in which E occurred with limiting relative frequency p. Thus the propensity p for E to occur depends upon G: Pr(E | G) = p. For Popper, then, a deterministic experiment would have propensity 0 or 1 for each outcome, since the generating conditions would produce the same outcome on each trial. In other words, non-trivial propensities (those that differ from 0 and 1) imply something less than determinism and yet still a causal dependence on the generating conditions.
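In symbols, the limiting-frequency reading above can be sketched as follows (an informal rendering; the notation S_n(E), for the number of occurrences of E in the first n repetitions of the generating conditions G, is introduced here purely for illustration):

\[ \Pr(E \mid G) = p \quad\text{is read as}\quad \lim_{n \to \infty} \frac{S_n(E)}{n} = p. \]

On this reading, a deterministic set-up yields the same outcome on every repetition, so S_n(E)/n is identically 0 or 1, which is why its propensities are trivial.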

Recent work

A number of other philosophers, including David Miller and Donald A. Gillies, have proposed propensity theories somewhat similar to Popper's, in that propensities are defined in terms of either long-run or infinitely long-run relative frequencies.

Other propensity theorists (e.g. Ronald Giere [7] ) do not explicitly define propensities at all, but rather see propensity as defined by the theoretical role it plays in science. They argue, for example, that physical magnitudes such as electrical charge likewise cannot be explicitly defined in terms of more basic things, but only in terms of what they do (such as attracting and repelling other electrical charges). In a similar way, propensity is whatever fills the various roles that physical probability plays in science.

Other theories have been offered by D. H. Mellor, [8] and Ian Hacking. [9]

Ballentine developed an axiomatic propensity theory, [10] building on the work of Paul Humphreys. [11] These works show that the causal nature of the conditioning event in a propensity conflicts with an axiom needed for Bayes' theorem.
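The difficulty can be sketched with Bayes' theorem itself (an informal illustration of the point, not a reproduction of Ballentine's axiomatic treatment). If Pr(E | G) denotes the propensity of the generating conditions G to produce the outcome E, the theorem licenses the inverse conditional probability

\[ \Pr(G \mid E) = \frac{\Pr(E \mid G)\,\Pr(G)}{\Pr(E)}, \]

but on a causal reading this "inverse propensity" of the conditions given the outcome has no clear sense, since outcomes do not produce the conditions that generate them. Humphreys takes this asymmetry to show that propensities cannot simply be identified with conditional probabilities.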

Principal principle of David Lewis

What roles does physical probability play in science? What are its properties? One central property of chance is that, when known, it constrains rational belief to take the same numerical value. David Lewis called this the Principal Principle. [12] The principle states:

  • The Principal Principle. Let C be any reasonable initial credence function. Let t be any time. Let x be any real number in the unit interval. Let X be the proposition that the chance, at time t, of A's holding equals x. Let E be any proposition compatible with X that is admissible at time t. Then C(A | XE) = x.

Thus, for example, suppose you are certain that a particular biased coin has propensity 0.32 to land heads each time it is tossed. What, then, is the correct credence that it will land heads on the next toss? According to the Principal Principle, the correct credence is 0.32.
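Spelled out in the notation of the principle (a direct substitution, assuming the rest of your evidence E is admissible), X is the proposition that the chance of heads on the next toss is 0.32, and the principle yields

\[ C(\text{heads} \mid X E) = 0.32. \]

Whatever other admissible evidence you hold, your credence in heads is fixed at the value of the chance itself.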


References

  1. "Interpretations of Probability", Stanford Encyclopedia of Philosophy. Retrieved 23 December 2006.
  2. Miller, Richard W. (1975). "Propensity: Popper or Peirce?". British Journal for the Philosophy of Science. 26 (2): 123–132. doi:10.1093/bjps/26.2.123.
  3. Haack, Susan; Kolenda, Konstantin (1977). "Two Fallibilists in Search of the Truth". Proceedings of the Aristotelian Society, Supplementary Volumes. 51: 63–104. doi:10.1093/aristoteliansupp/51.1.63. JSTOR 4106816.
  4. Burks, Arthur W. (1978). Chance, Cause and Reason: An Inquiry into the Nature of Scientific Evidence. University of Chicago Press. 694 pages. ISBN 978-0-226-08087-1.
  5. Peirce, Charles Sanders; Burks, Arthur W., ed. (1958). The Collected Papers of Charles Sanders Peirce, Volumes 7 and 8. Cambridge, MA: Harvard University Press; also Belknap Press (of Harvard University Press) edition, vols. 7–8 bound together, 798 pages; online via InteLex; reprinted in 1998 by Thoemmes Continuum.
  6. Popper, Karl R. (1959). "The Propensity Interpretation of Probability". The British Journal for the Philosophy of Science. 10 (37): 25–42. doi:10.1093/bjps/X.37.25. ISSN 0007-0882. JSTOR 685773.
  7. Giere, Ronald N. (1973). "Objective Single Case Probabilities and the Foundations of Statistics". Studies in Logic and the Foundations of Mathematics. Vol. 73. pp. 467–483. doi:10.1016/S0049-237X(09)70380-5. ISBN 978-0-444-10491-5.
  8. Mellor, D. H. (1971). The Matter of Chance. Cambridge University Press. ISBN 978-0521615983.
  9. Hacking, Ian (1965). Logic of Statistical Inference. Cambridge University Press. ISBN 9781316508145.
  10. Ballentine, Leslie E. (August 2016). "Propensity, Probability, and Quantum Theory". Foundations of Physics. 46 (8): 973–1005. doi:10.1007/s10701-016-9991-0. ISSN 0015-9018. S2CID 254508686.
  11. Humphreys, Paul (October 1985). "Why Propensities Cannot be Probabilities". The Philosophical Review. 94 (4): 557–570. doi:10.2307/2185246. JSTOR 2185246. S2CID 55871596.
  12. Lewis, David (1980). "A Subjectivist's Guide to Objective Chance". In Jeffrey, R. (ed.). Studies in Inductive Logic and Probability. Vol. 2. Berkeley: University of California Press. pp. 263–293. ISBN 0-520-03826-6.
