Reference class problem

In statistics, the reference class problem is the problem of deciding what class to use when calculating the probability applicable to a particular case.

For example, to estimate the probability of an aircraft crashing, we could refer to the frequency of crashes among various different sets of aircraft: all aircraft, this make of aircraft, aircraft flown by this company in the last ten years, etc. The aircraft for which we wish to calculate the probability of a crash is a member of many different classes, and the frequency of crashes differs between them. It is not obvious which class we should refer to for this aircraft. In general, any case is a member of very many classes among which the frequency of the attribute of interest differs. The reference class problem asks which class is the most appropriate to use.
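The point can be made concrete with a small sketch. All the counts below are invented for illustration; the only claim is structural: one aircraft, several reference classes, several different probability estimates.

```python
# Toy illustration of the reference class problem: the same aircraft
# belongs to several classes, each with a different crash frequency.
# All counts are hypothetical.
classes = {
    "all aircraft": (120, 1_000_000),             # (crashes, flights)
    "this make of aircraft": (3, 200_000),
    "this airline, last ten years": (1, 50_000),
}

# Each class yields a different relative-frequency estimate for the
# very same individual aircraft.
rates = {name: crashes / flights for name, (crashes, flights) in classes.items()}
for name, rate in rates.items():
    print(f"{name}: {rate:.6f}")
```

Nothing in the frequencies themselves tells us which of the three estimates to attach to this particular flight; that is precisely the problem.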

More formally, many arguments in statistics take the form of a statistical syllogism:

  1. X proportion of F are G
  2. I is an F
  3. Therefore, the chance that I is a G is X

F is called the "reference class", G is the "attribute class", and I is the individual object. How is one to choose an appropriate class F?

In Bayesian statistics, the problem arises as that of deciding on a prior probability for the outcome in question (or when considering multiple outcomes, a prior probability distribution).

History

John Venn stated in 1876 that "every single thing or event has an indefinite number of properties or attributes observable in it, and might therefore be considered as belonging to an indefinite number of different classes of things", leading to problems with how to assign probabilities to a single case. He used as an example the probability that John Smith, a consumptive Englishman aged fifty, will live to sixty-one.[1]

The name "problem of the reference class" was given by Hans Reichenbach, who wrote, "If we are asked to find the probability holding for an individual future event, we must first incorporate the event into a suitable reference class. An individual thing or event may be incorporated in many reference classes, from which different probabilities will result."[2]

There has also been discussion of the reference class problem in philosophy[3] and in the life sciences, e.g., clinical trial prediction.[4]

Legal applications

Applying Bayesian probability in practice involves assessing a prior probability which is then applied to a likelihood function and updated through the use of Bayes' theorem. Suppose we wish to assess the probability of guilt of a defendant in a court case in which DNA (or other probabilistic) evidence is available. We first need to assess the prior probability of guilt of the defendant. We could say that the crime occurred in a city of 1,000,000 people, of whom 15% meet the requirements of being the same sex, age group and approximate description as the perpetrator. That suggests a prior probability of guilt of 1 in 150,000. We could cast the net wider and say that there is, say, a 25% chance that the perpetrator is from out of town, but still from this country, and construct a different prior estimate. We could say that the perpetrator could come from anywhere in the world, and so on.
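How the choice of reference class propagates through the update can be sketched with Bayes' theorem in odds form. The 1-in-150,000 prior comes from the text; the wider-class prior and the likelihood ratio for the DNA evidence are hypothetical numbers chosen only to show the effect.

```python
def posterior_prob(prior: float, likelihood_ratio: float) -> float:
    """Update a prior probability via Bayes' theorem in odds form:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

LIKELIHOOD_RATIO = 1_000_000  # hypothetical strength of the DNA evidence

# Prior from the city reference class (the text's 1-in-150,000 figure):
p_city = posterior_prob(1 / 150_000, LIKELIHOOD_RATIO)
# Prior from a hypothetical wider reference class:
p_wider = posterior_prob(1 / 600_000, LIKELIHOOD_RATIO)

print(f"city reference class:  posterior {p_city:.3f}")
print(f"wider reference class: posterior {p_wider:.3f}")
```

The evidence is identical in both runs, yet the posterior probability of guilt differs materially (roughly 0.87 versus 0.6 here) purely because of the reference class chosen for the prior.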

Legal theorists have discussed the reference class problem particularly with reference to the Shonubi case. Charles Shonubi, a Nigerian drug smuggler, was arrested at JFK Airport on December 10, 1991, and convicted of heroin importation. The severity of his sentence depended not only on the amount of drugs on that trip, but the total amount of drugs he was estimated to have imported on seven previous occasions on which he was not caught. Five separate legal cases debated how that amount should be estimated. In one case, "Shonubi III", the prosecution presented statistical evidence of the amount of drugs found on Nigerian drug smugglers caught at JFK Airport in the period between Shonubi's first and last trips. There has been debate over whether that is the (or a) correct reference class to use, and if so, why.[5][6]

Other legal applications involve valuation. For example, houses might be valued using the data in a database of house sales of "similar" houses. To decide on which houses are similar to a given one, one needs to know which features of a house are relevant to price. Number of bathrooms might be relevant, but not the eye color of the owner. It has been argued that such reference class problems can be solved by finding which features are relevant: a feature is relevant to house price if house price covaries with it (it affects the likelihood that the house has a higher or lower value), and the ideal reference class for an individual is the set of all instances which share with it all relevant features.[7][8]
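The relevance criterion can be sketched numerically: treat a feature as relevant when it covaries with price. The data, the feature set, and the eye-colour encoding below are all invented for illustration.

```python
import statistics

# Each row: (bathrooms, owner_eye_colour_code, price in $1000s) -- toy data.
houses = [
    (1, 0, 300),
    (2, 1, 400),
    (3, 0, 520),
    (2, 2, 410),
    (1, 1, 310),
]

def covariance(xs, ys):
    """Sample covariance of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

prices = [price for _, _, price in houses]
cov_bathrooms = covariance([b for b, _, _ in houses], prices)
cov_eye_colour = covariance([e for _, e, _ in houses], prices)

print(f"bathrooms vs price:  {cov_bathrooms:.1f}")   # covaries strongly
print(f"eye colour vs price: {cov_eye_colour:.1f}")  # near zero
```

In practice one would prefer a scale-free measure such as correlation, but the sketch shows the idea: features that covary with price (bathrooms) are kept, features that do not (eye colour) are discarded, and the reference class for a given house is the set of houses sharing its relevant features.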


References

  1. J. Venn, The Logic of Chance (2nd ed., 1876), p. 194.
  2. H. Reichenbach, The Theory of Probability (1949), p. 374.
  3. A. Hájek, "The Reference Class Problem is Your Problem Too", Synthese 156 (2007): 185–215.
  4. Atanasov, Pavel D.; Joseph, Regina; Feijoo, Felipe; Marshall, Max; Siddiqui, Sauleh (2021-12-09). "Human Forest vs. Random Forest in Time-Sensitive COVID-19 Clinical Trial Prediction". SSRN 3981732.
  5. M. Colyvan, H. M. Regan and S. Ferson, "Is it a crime to belong to a reference class?", Journal of Political Philosophy 9 (2001): 168–181.
  6. Tillers, Peter (2005). "If wishes were horses: discursive comments on attempts to prevent individuals from being unfairly burdened by their reference classes". Law, Probability and Risk. 4 (1–2): 33–49. doi:10.1093/lpr/mgi001.
  7. Franklin, James (2010). "Feature selection methods for solving the reference class problem" (PDF). Columbia Law Review Sidebar. 110. Retrieved 30 June 2021.
  8. Franklin, James (2011). "The objective Bayesian conceptualisation of proof and reference class problems". Sydney Law Review. 33: 545–561. Retrieved 30 June 2021.