Evidence under Bayes' theorem

The use of evidence under Bayes' theorem relates to the probability of finding evidence in relation to the accused, where Bayes' theorem concerns the probability of an event and its inverse. Specifically, it compares the probability of finding particular evidence if the accused were guilty versus if they were not guilty. An example would be the probability of finding a person's hair at the scene if they were guilty, versus if they had merely passed through the scene. A related issue is that a person's DNA may be found where they live, regardless of whether a crime was committed there.

Explanation

Among evidence scholars, the study of evidence in recent decades has become broadly interdisciplinary, incorporating insights from psychology, economics, and probability theory. One area of particular interest and controversy has been Bayes' theorem.[1] Bayes' theorem is an elementary proposition of probability theory. It provides a way of updating, in light of new information, one's probability that a proposition is true. Evidence scholars have been interested in its application to their field, either to study the value of rules of evidence, or to help determine facts at trial.

Suppose that the proposition to be proven is that the defendant was the source of a hair found at the crime scene. Before learning that the hair was a genetic match for the defendant's hair, the factfinder believes that the odds are 2 to 1 that the defendant was the source of the hair. Using Bayes' theorem, they could multiply those prior odds by a "likelihood ratio" to update their odds after learning that the hair matched the defendant's hair. The likelihood ratio is a statistic derived by comparing the probability that the evidence (expert testimony of a match) would be found if the defendant was the source with the probability that it would be found if the defendant was not the source. If testimony of a match is ten times more likely to occur if the defendant was the source than if not, the factfinder should multiply their prior odds by ten, giving posterior odds of 20 to 1.
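The arithmetic in this example can be sketched in a few lines of Python. The numbers (prior odds of 2 to 1, a likelihood ratio of 10) come from the hypothetical above; the function names are illustrative, not from any standard library.

```python
from fractions import Fraction

def update_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' theorem: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds in favor (for : against) to a probability."""
    return odds / (odds + 1)

prior = Fraction(2, 1)   # factfinder's prior odds: 2 to 1 that defendant was the source
lr = Fraction(10, 1)     # match testimony is 10x more likely if defendant was the source
posterior = update_odds(prior, lr)
print(posterior)                              # 20, i.e. posterior odds of 20 to 1
print(float(odds_to_probability(posterior)))  # ~0.952
```

The exact fractions make the point that the update is pure multiplication; converting the final odds to a probability is a separate, last step.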

Bayesian skeptics have objected to this use of Bayes’ theorem in litigation on a variety of grounds. These run from jury confusion and computational complexity to the assertion that standard probability theory is not a normatively satisfactory basis for adjudication of rights.

Bayesian enthusiasts have replied on two fronts. First, they have said that whatever its value in litigation, Bayes' theorem is valuable in studying evidence rules. For example, it can be used to model relevance. It teaches that the relevance of evidence that a proposition is true depends on how much the evidence changes the prior odds, and that how much it changes the prior odds depends on how likely the evidence would be found (or not) if the proposition were true. These basic insights are also useful in studying individual evidence rules, such as the rule allowing witnesses to be impeached with prior convictions.

Second, they have said that it is practical to use Bayes' theorem in a limited set of circumstances in litigation (such as integrating genetic match evidence with other evidence), and that assertions that probability theory is inappropriate for judicial determinations are nonsensical or inconsistent.

Some observers believe that in recent years (i) the debate about probabilities has become stagnant, (ii) the protagonists in the probabilities debate have been talking past each other, (iii) not much is happening at the high-theory level, and (iv) the most interesting work is in the empirical study of the efficacy of instructions on Bayes' theorem in improving jury accuracy. However, it is possible that this skepticism about the probabilities debate in law rests on observations of the arguments made by familiar protagonists in the legal academy. In fields outside of law, work on formal theories relating to uncertainty continues unabated. One important development has been the work on "soft computing", such as that carried on at Berkeley under Lotfi Zadeh's Berkeley Initiative in Soft Computing (BISC). Another example is the increasing amount of work, by people both in and outside law, on "argumentation" theory. Also, work on Bayes nets continues. Some of this work is beginning to filter into legal circles. See, for example, the many papers on formal approaches to uncertainty (including Bayesian approaches) in the Oxford journal Law, Probability and Risk.

Examples

One well-known case in which Bayes' theorem was applied, and its courtroom use ultimately restricted, is R v Adams.

Related Research Articles

Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.

Statistical inference

Statistical inference is the process of using data analysis to infer properties of an underlying distribution of probability. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.

In probability theory and statistics, Bayes' theorem, named after Thomas Bayes, describes the probability of an event, based on prior knowledge of conditions that might be related to the event. For example, if the risk of developing health problems is known to increase with age, Bayes' theorem allows the risk to an individual of a known age to be assessed more accurately by conditioning it relative to their age, rather than simply assuming that the individual is typical of the population as a whole.
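The conditioning described above can be made concrete with a short Python sketch. The numbers are invented for illustration (a hypothetical diagnostic-style setting), and the function name is ours, not a library API; the point is only the mechanics of updating a base rate on new information.

```python
def posterior_probability(p_e_given_h, p_e_given_not_h, prior_h):
    """Bayes' theorem, with the denominator expanded by the law of total probability:
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)(1 - P(H))]."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Hypothetical: a condition with a 10% base rate in a given age group,
# a test that detects it 90% of the time, and a 5% false-positive rate.
p = posterior_probability(0.90, 0.05, 0.10)
print(round(p, 3))  # 0.667
```

Note how far the posterior sits from the 90% detection rate: the low base rate drags it down, which is exactly the effect the base rate fallacy (below) describes people ignoring.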

Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".

Thomas Bayes (c. 1701 – 1761)

Thomas Bayes was an English statistician, philosopher and Presbyterian minister who is known for formulating a specific case of the theorem that bears his name—Bayes' theorem. Bayes never published what would become his most famous accomplishment; his notes were edited and published posthumously by Richard Price.

In probability theory, odds provide a measure of the likelihood of a particular outcome. They are calculated as the ratio of the number of events that produce that outcome to the number that do not. Odds are commonly used in gambling and statistics.

Dempster–Shafer theory

The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory (DST), is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories. First introduced by Arthur P. Dempster in the context of statistical inference, the theory was later developed by Glenn Shafer into a general framework for modeling epistemic uncertainty—a mathematical theory of evidence. The theory allows one to combine evidence from different sources and arrive at a degree of belief that takes into account all the available evidence.
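Dempster's rule of combination, the theory's central operation for combining evidence from different sources, can be sketched in Python. The frame of discernment and the mass assignments below are invented for illustration; the implementation follows the standard rule (intersect focal elements, discard conflicting mass, renormalize).

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions whose keys are frozensets
    of hypotheses. Mass assigned to empty intersections (the conflict K) is
    discarded and the remainder renormalized by 1 - K."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

rain, sun = frozenset({"rain"}), frozenset({"sun"})
theta = rain | sun  # the whole frame: mass on theta expresses ignorance
m1 = {rain: 0.6, theta: 0.4}              # one source of evidence
m2 = {rain: 0.5, sun: 0.3, theta: 0.2}    # a second, partly conflicting source
m = combine(m1, m2)
print({tuple(sorted(k)): round(v, 3) for k, v in m.items()})
```

Unlike a Bayesian prior, the mass left on the whole frame lets a source say "I don't know" without committing probability to either hypothesis.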

Bayesian statistics is a theory in the field of statistics based on the Bayesian interpretation of probability where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation that views probability as the limit of the relative frequency of an event after many trials.

Base rate fallacy

The base rate fallacy, also called base rate neglect or base rate bias, is a type of fallacy in which people tend to ignore the base rate in favor of individuating information. Base rate neglect is a specific form of the more general extension neglect.

The Bayes factor is a ratio of two competing statistical models represented by their evidence, and is used to quantify the support for one model over the other. The models in question can have a common set of parameters, such as a null hypothesis and an alternative, but this is not necessary; for instance, it could also be a non-linear model compared to its linear approximation. The Bayes factor can be thought of as a Bayesian analog to the likelihood-ratio test, but since it uses the (integrated) marginal likelihood rather than the maximized likelihood, the two tests only coincide under simple hypotheses. Also, in contrast with null hypothesis significance testing, Bayes factors support evaluation of evidence in favor of a null hypothesis, rather than only allowing the null to be rejected or not rejected.
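A minimal worked example, under assumed models: a Bayes factor comparing a point null (a fair coin, p = 0.5) with an alternative that integrates the binomial likelihood over a uniform prior on p. For that uniform prior the marginal likelihood has the closed form 1/(n + 1), so no numerical integration is needed.

```python
from math import comb

def bayes_factor_null_vs_uniform(k, n):
    """BF01 for k successes in n Bernoulli trials: the marginal likelihood
    under H0 (p = 0.5) divided by the marginal under H1 (p ~ Uniform(0, 1)),
    which integrates to exactly 1/(n + 1)."""
    marginal_h0 = comb(n, k) * 0.5 ** n
    marginal_h1 = 1.0 / (n + 1)
    return marginal_h0 / marginal_h1

print(bayes_factor_null_vs_uniform(7, 10))  # ~1.289: weak support for the fair coin
```

Because the alternative's likelihood is integrated over its prior rather than maximized, a mildly surprising result (7 heads in 10) can still slightly favor the null, something a maximized likelihood ratio cannot do.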

R v Adams

R v Adams [1996] EWCA Crim 10 and 222 are rulings in the United Kingdom that excluded standalone, headline ("soundbite") Bayesian statistics from the reasoning admissible before a jury in DNA evidence cases, in favour of the calculated average number of matching incidences among the nation's population. The facts involved strong but inconclusive evidence conflicting with the DNA evidence, leading to a retrial.

In statistics, the reference class problem is the problem of deciding what class to use when calculating the probability applicable to a particular case.

Probabilistic logic involves the use of probability and logic to deal with uncertain situations. Probabilistic logic extends traditional logic truth tables with probabilistic expressions. A difficulty of probabilistic logics is their tendency to multiply the computational complexities of their probabilistic and logical components. Other difficulties include the possibility of counter-intuitive results, such as in case of belief fusion in Dempster–Shafer theory. Source trust and epistemic uncertainty about the probabilities they provide, such as defined in subjective logic, are additional elements to consider. The need to deal with a broad variety of contexts and issues has led to many different proposals.

Statistical proof is the rational demonstration of the degree of certainty for a proposition, hypothesis or theory, used to convince others after a statistical test of the supporting evidence and of the types of inferences that can be drawn from the test scores. Statistical methods are used to increase understanding of the facts, and the proof demonstrates the validity and logic of the inference, with explicit reference to a hypothesis, the experimental data, the facts, the test, and the odds. Proof has two essential aims: the first is to convince, and the second is to explain the proposition through peer and public review.

An Essay towards solving a Problem in the Doctrine of Chances is a work on the mathematical theory of probability by Thomas Bayes, published in 1763, two years after its author's death, and containing multiple amendments and additions due to his friend Richard Price. The title comes from the contemporary use of the phrase "doctrine of chances" to mean the theory of probability, which had been introduced via the title of a book by Abraham de Moivre. Contemporary reprints of the Essay carry a more specific and significant title: A Method of Calculating the Exact Probability of All Conclusions founded on Induction.

Bayesian inference in marketing

In marketing, Bayesian inference allows for decision making and market research evaluation under uncertainty and with limited data.

Forensic epidemiology

The discipline of forensic epidemiology (FE) is a hybrid of principles and practices common to both forensic medicine and epidemiology. FE is directed at filling the gap between clinical judgment and epidemiologic data for determinations of causality in civil lawsuits and criminal prosecution and defense.

Bayesian epistemology is a formal approach to various topics in epistemology that has its roots in Thomas Bayes' work in the field of probability theory. One advantage of its formal method in contrast to traditional epistemology is that its concepts and theorems can be defined with a high degree of precision. It is based on the idea that beliefs can be interpreted as subjective probabilities. As such, they are subject to the laws of probability theory, which act as the norms of rationality. These norms can be divided into static constraints, governing the rationality of beliefs at any moment, and dynamic constraints, governing how rational agents should change their beliefs upon receiving new evidence. The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. Bayesians have applied these fundamental principles to various epistemological topics but Bayesianism does not cover all topics of traditional epistemology. The problem of confirmation in the philosophy of science, for example, can be approached through the Bayesian principle of conditionalization by holding that a piece of evidence confirms a theory if it raises the likelihood that this theory is true. Various proposals have been made to define the concept of coherence in terms of probability, usually in the sense that two propositions cohere if the probability of their conjunction is higher than if they were neutrally related to each other. The Bayesian approach has also been fruitful in the field of social epistemology, for example, concerning the problem of testimony or the problem of group belief. Bayesianism still faces various theoretical objections that have not been fully solved.

References

  1. Bernard Robertson and Tony Vignaux, "Bayes' Theorem in the Court of Appeal" (on R v Adams), Law Articles, LawIntl-2451.