Bayesian epistemology is a formal approach to various topics in epistemology that has its roots in Thomas Bayes' work in the field of probability theory. [1] One advantage of its formal method in contrast to traditional epistemology is that its concepts and theorems can be defined with a high degree of precision. It is based on the idea that beliefs can be interpreted as subjective probabilities. As such, they are subject to the laws of probability theory, which act as the norms of rationality. These norms can be divided into static constraints, governing the rationality of beliefs at any moment, and dynamic constraints, governing how rational agents should change their beliefs upon receiving new evidence. The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. Bayesians have applied these fundamental principles to various epistemological topics but Bayesianism does not cover all topics of traditional epistemology. The problem of confirmation in the philosophy of science, for example, can be approached through the Bayesian principle of conditionalization by holding that a piece of evidence confirms a theory if it raises the likelihood that this theory is true. Various proposals have been made to define the concept of coherence in terms of probability, usually in the sense that two propositions cohere if the probability of their conjunction is higher than if they were neutrally related to each other. The Bayesian approach has also been fruitful in the field of social epistemology, for example, concerning the problem of testimony or the problem of group belief. Bayesianism still faces various theoretical objections that have not been fully solved.
Traditional epistemology and Bayesian epistemology are both forms of epistemology, but they differ in various respects, for example, concerning their methodology, their interpretation of belief, the role justification or confirmation plays in them, and some of their research interests. Traditional epistemology focuses on topics such as the analysis of the nature of knowledge, usually in terms of justified true beliefs, the sources of knowledge, like perception or testimony, the structure of a body of knowledge, for example in the form of foundationalism or coherentism, and the problem of philosophical skepticism or the question of whether knowledge is possible at all. [2] [3] These inquiries are usually based on epistemic intuitions and regard beliefs as either present or absent. [4] Bayesian epistemology, on the other hand, works by formalizing concepts and problems, which are often vague in the traditional approach. It thereby focuses more on mathematical intuitions and promises a higher degree of precision. [1] [4] It sees belief as a continuous phenomenon that comes in various degrees, so-called credences. [5] Some Bayesians have even suggested that the regular notion of belief should be abandoned. [6] But there are also proposals to connect the two, for example, the Lockean thesis, which defines belief as credence above a certain threshold. [7] [8] Justification plays a central role in traditional epistemology while Bayesians have focused on the related notions of confirmation and disconfirmation through evidence. [5] The notion of evidence is important for both approaches but only the traditional approach has been interested in studying the sources of evidence, like perception and memory. Bayesianism, on the other hand, has focused on the role of evidence for rationality: how someone's credence should be adjusted upon receiving new evidence. [5] There is an analogy between the Bayesian norms of rationality in terms of probabilistic laws and the traditional norms of rationality in terms of deductive consistency. [5] [6] Certain traditional problems, like the topic of skepticism about our knowledge of the external world, are difficult to express in Bayesian terms. [5]
Bayesian epistemology is based only on a few fundamental principles, which can be used to define various other notions and can be applied to many topics in epistemology. [5] [4] At their core, these principles constitute constraints on how we should assign credences to propositions. They determine what an ideally rational agent would believe. [6] The basic principles can be divided into synchronic or static principles, which govern how credences are to be assigned at any moment, and diachronic or dynamic principles, which determine how the agent should change their beliefs upon receiving new evidence. The axioms of probability and the principal principle belong to the static principles while the principle of conditionalization governs the dynamic aspects as a form of probabilistic inference. [6] [4] The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. [4] This test for determining irrationality has been referred to as the "pragmatic self-defeat test". [6]
One important difference from traditional epistemology is that Bayesian epistemology focuses not on the notion of simple belief but on the notion of degrees of belief, so-called credences. [1] This approach tries to capture the idea of certainty: [4] we believe in all kinds of claims but we are more certain about some, like that the earth is round, than about others, like that Plato was the author of the First Alcibiades. These degrees come in values between 0 and 1. A degree of 1 implies that a claim is completely accepted. A degree of 0, on the other hand, corresponds to full disbelief. This means that the claim is fully rejected and the person firmly believes the opposite claim. A degree of 0.5 corresponds to suspension of belief, meaning that the person has not yet made up their mind: they have no opinion either way and thus neither accept nor reject the claim. According to the Bayesian interpretation of probability, credences stand for subjective probabilities. Following Frank P. Ramsey, they are interpreted in terms of the willingness to bet money on a claim. [9] [1] [4] So having a credence of 0.8 (i.e. 80%) that your favorite soccer team will win the next game would mean being willing to bet up to four dollars for the chance to make one dollar profit. This account draws a tight connection between Bayesian epistemology and decision theory. [10] [11] It might seem that betting behavior is only one special area and as such not suited for defining such a general notion as credences. But, as Ramsey argues, we bet all the time when understood in the widest sense. For example, in going to the train station, we bet on the train being there on time, otherwise we would have stayed at home. [4] It follows from the interpretation of credence in terms of willingness to make bets that it would be irrational to ascribe a credence of 0 or 1 to any proposition, except for contradictions and tautologies. [6] The reason for this is that ascribing these extreme values would mean that one would be willing to bet anything, including one's life, even if the payoff was minimal. [1] Another negative side effect of such extreme credences is that they are permanently fixed and cannot be updated anymore upon acquiring new evidence.
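Ramsey's betting interpretation can be sketched in a few lines of Python. The `max_stake` helper and the dollar figures are hypothetical, chosen to match the soccer example above: a fair bet is one whose expected value is zero given the agent's credence.

```python
def max_stake(credence, profit=1.0):
    """Maximum stake a Ramsey-style bettor would risk for a given profit.

    A credence p makes a bet with stake s and profit g fair when its
    expected value p * g - (1 - p) * s is zero, i.e. s = p * g / (1 - p).
    """
    if not 0 <= credence < 1:
        raise ValueError("credence must lie in [0, 1)")
    return credence * profit / (1.0 - credence)

# A credence of 0.8 that your team wins: stake up to about $4 to win $1.
print(max_stake(0.8))
```

Note how the stake grows without bound as the credence approaches 1, which is the formal counterpart of the remark that extreme credences mean being willing to bet anything.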
This central tenet of Bayesianism, that credences are interpreted as subjective probabilities and are therefore governed by the norms of probability, has been referred to as probabilism. [10] These norms express the nature of the credences of ideally rational agents. [4] They do not put demands on what credence we should have in any single given belief, for example, whether it will rain tomorrow. Instead, they constrain the system of beliefs as a whole. [4] For example, if your credence that it will rain tomorrow is 0.8 then your credence in the opposite proposition, i.e. that it will not rain tomorrow, should be 0.2, not 0.1 or 0.5. According to Stephan Hartmann and Jan Sprenger, the axioms of probability can be expressed through the following two laws: (1) P(T) = 1 for any tautology T; (2) for incompatible (mutually exclusive) propositions A and B, P(A ∨ B) = P(A) + P(B). [4]
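These two laws can be checked against a toy credence assignment, with propositions modeled as sets of mutually exclusive possible scenarios. The weather scenarios and numbers below are invented for illustration:

```python
# Credences as a probability distribution over four exhaustive,
# mutually exclusive weather scenarios (hypothetical numbers).
credences = {"rain": 0.8, "snow": 0.05, "overcast": 0.1, "clear": 0.05}

# Law (1): a tautology (the whole outcome space) gets probability 1.
assert abs(sum(credences.values()) - 1.0) < 1e-9

def prob(event):
    """Probability of a proposition, given as a set of scenarios."""
    return sum(credences[w] for w in event)

# Law (2): for incompatible propositions, P(A or B) = P(A) + P(B).
wet = {"rain", "snow"}            # disjoint scenario-sets
dry = {"overcast", "clear"}
assert abs(prob(wet | dry) - (prob(wet) + prob(dry))) < 1e-9

# The rain example from the text: P(not rain) = 1 - P(rain) = 0.2.
print(prob({"snow", "overcast", "clear"}))
```

The constraint is holistic, as the text says: no individual number in `credences` is mandated, but the numbers must fit together.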
Another important Bayesian principle of degrees of beliefs is the principal principle due to David Lewis. [10] It states that our knowledge of objective probabilities should correspond to our subjective probabilities in the form of credences. [4] [5] So if you know that the objective chance of a coin landing heads is 50% then your credence that the coin will land heads should be 0.5.
The axioms of probability together with the principal principle determine the static or synchronic aspect of rationality: what an agent's beliefs should be like when only considering one moment. [1] But rationality also involves a dynamic or diachronic aspect, which comes into play when changing one's credences upon being confronted with new evidence. This aspect is determined by the principle of conditionalization. [1] [4]
The principle of conditionalization governs how the agent's credence in a hypothesis should change upon receiving new evidence for or against this hypothesis. [6] [10] As such, it expresses the dynamic aspect of how ideal rational agents would behave. [1] It is based on the notion of conditional probability, which is the measure of the probability that one event occurs given that another event has already occurred. The unconditional probability that A will occur is usually expressed as P(A) while the conditional probability that A will occur given that B has already occurred is written as P(A | B). For example, the probability of flipping a coin two times and the coin landing heads two times is only 25%. But the conditional probability of this occurring given that the coin has landed heads on the first flip is then 50%. The principle of conditionalization applies this idea to credences: [1] we should change our credence that the coin will land heads two times upon receiving evidence that it has already landed heads on the first flip. The probability assigned to the hypothesis before the event is called prior probability. [12] The probability afterward is called posterior probability. According to the simple principle of conditionalization, this can be expressed in the following way: P_new(H) = P(H | E) = P(H ∧ E) / P(E). [1] [6] So the posterior probability that the hypothesis is true is equal to the conditional prior probability that the hypothesis is true relative to the evidence, which is equal to the prior probability that both the hypothesis and the evidence are true, divided by the prior probability that the evidence is true. The original expression of this principle, referred to as Bayes' theorem, can be directly deduced from this formulation. [6]
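The coin example can be worked through in a few lines. This is a minimal sketch; encoding propositions as sets of equiprobable outcomes is one common way to set it up, and exact fractions avoid rounding noise:

```python
from fractions import Fraction

# Equiprobable outcomes of two fair coin flips.
outcomes = ["HH", "HT", "TH", "TT"]
prior = {o: Fraction(1, 4) for o in outcomes}

def p(event):
    """Prior probability of a proposition, given as a set of outcomes."""
    return sum(prior[o] for o in outcomes if o in event)

hypothesis = {"HH"}              # "both flips land heads"
evidence = {"HH", "HT"}          # "the first flip landed heads"

# Conditionalization: P_new(H) = P(H | E) = P(H and E) / P(E).
posterior = p(hypothesis & evidence) / p(evidence)

print(p(hypothesis))   # prior credence: 1/4
print(posterior)       # posterior credence after the first flip: 1/2
```

The prior of 1/4 (25%) rises to 1/2 (50%) on learning the first flip landed heads, matching the figures in the text.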
The simple principle of conditionalization makes the assumption that our credence in the acquired evidence, i.e. its posterior probability, is 1, which is unrealistic. For example, scientists sometimes need to discard previously accepted evidence upon making new discoveries, which would be impossible if the corresponding credence was 1. [6] An alternative form of conditionalization, proposed by Richard Jeffrey, adjusts the formula to take the probability of the evidence into account: [13] [14] P_new(H) = P(H | E) × P_new(E) + P(H | ¬E) × P_new(¬E). [6]
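Jeffrey's rule can be sketched directly from this formula. The numbers below are hypothetical, chosen only to show the two terms at work:

```python
def jeffrey_update(p_h_given_e, p_h_given_not_e, p_e_new):
    """Jeffrey conditionalization: update P(H) when the evidence E
    itself only becomes probable to degree p_e_new, not certain.

    P_new(H) = P(H|E) * P_new(E) + P(H|not E) * P_new(not E)
    """
    return p_h_given_e * p_e_new + p_h_given_not_e * (1.0 - p_e_new)

# Hypothetical numbers: H is 0.9 likely if E holds, 0.2 if it does not.
# Becoming 75% (not 100%) confident in E mixes the two cases:
print(jeffrey_update(0.9, 0.2, 0.75))   # 0.9 * 0.75 + 0.2 * 0.25
# With P_new(E) = 1 the rule reduces to simple conditionalization:
print(jeffrey_update(0.9, 0.2, 1.0))
```

The second call illustrates why Jeffrey conditionalization generalizes the simple principle: setting the new credence in the evidence to 1 makes the second term vanish.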
A Dutch book is a series of bets that necessarily results in a loss. [15] [16] An agent is vulnerable to a Dutch book if their credences violate the laws of probability. [4] This can be either in synchronic cases, in which the conflict happens between beliefs held at the same time, or in diachronic cases, in which the agent does not respond properly to new evidence. [6] [16] In the simplest synchronic case, only two credences are involved: the credence in a proposition and in its negation. [17] The laws of probability hold that these two credences together should amount to 1 since either the proposition or its negation is true. Agents who violate this law are vulnerable to a synchronic Dutch book. [6] For example, given the proposition that it will rain tomorrow, suppose that an agent's degree of belief that it is true is 0.51 and the degree that it is false is also 0.51. In this case, the agent would be willing to accept two bets at $0.51 for the chance to win $1: one that it will rain and another that it will not rain. The two bets together cost $1.02, resulting in a loss of $0.02, no matter whether it will rain or not. [17] The principle behind diachronic Dutch books is the same, but they are more complicated since they involve making bets before and after receiving new evidence and have to take into account that there is a loss in each case no matter how the evidence turns out. [17] [16]
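The synchronic example above can be verified by enumerating both possible worlds; a short sketch using the same numbers as the text:

```python
# Synchronic Dutch book: the agent's credences in "rain tomorrow" and
# "no rain tomorrow" are both 0.51 (they sum to 1.02 > 1), so the agent
# accepts a $0.51 price for each $1 bet.
price = 0.51

def net_payoff(it_rains):
    """Total payoff after buying both bets for $price each.
    Exactly one of the two bets pays out $1."""
    payout_rain_bet = 1.0 if it_rains else 0.0
    payout_no_rain_bet = 0.0 if it_rains else 1.0
    return payout_rain_bet + payout_no_rain_bet - 2 * price

# A guaranteed loss of $0.02, whichever way the weather turns out.
for rains in (True, False):
    print(round(net_payoff(rains), 2))
```

Since the loss is the same in both worlds, no information about tomorrow's weather can rescue the agent, which is exactly what makes the book "Dutch".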
There are different interpretations of what it means for an agent to be vulnerable to a Dutch book. On the traditional interpretation, such a vulnerability reveals that the agent is irrational since they would willingly engage in behavior that is not in their best self-interest. [6] One problem with this interpretation is that it assumes logical omniscience as a requirement for rationality, which is problematic especially in complicated diachronic cases. An alternative interpretation uses Dutch books as "a kind of heuristic for determining when one's degrees of belief have the potential to be pragmatically self-defeating". [6] This interpretation is compatible with holding a more realistic view of rationality in the face of human limitations. [16]
Dutch books are closely related to the axioms of probability. [16] The Dutch book theorem holds that credence assignments violating the axioms of probability are vulnerable to Dutch books. The converse Dutch book theorem states that no credence assignment following these axioms is vulnerable to a Dutch book. [4] [16]
In the philosophy of science, confirmation refers to the relation between a piece of evidence and a hypothesis confirmed by it. [18] Confirmation theory is the study of confirmation and disconfirmation: how scientific hypotheses are supported or refuted by evidence. [19] Bayesian confirmation theory provides a model of confirmation based on the principle of conditionalization. [6] [18] A piece of evidence confirms a theory if the conditional probability of that theory relative to the evidence is higher than the unconditional probability of the theory by itself. [18] Expressed formally: E confirms H if P(H | E) > P(H). [6] If the evidence lowers the probability of the hypothesis then it disconfirms it. Scientists are usually not just interested in whether a piece of evidence supports a theory but also in how much support it provides. There are different ways in which this degree can be determined. [18] The simplest version just measures the difference between the conditional probability of the hypothesis relative to the evidence and the unconditional probability of the hypothesis, i.e. the degree of support is P(H | E) − P(H). [4] The problem with measuring this degree is that it depends on how certain the theory already is prior to receiving the evidence. So if a scientist is already very certain that a theory is true then one further piece of evidence will not affect her credence much, even if the evidence would be very strong. [6] [4] There are other constraints for how an evidence measure should behave, for example, surprising evidence, i.e. evidence that had a low probability on its own, should provide more support. [4] [18] Scientists are often faced with the problem of having to decide between two competing theories. In such cases, the interest is not so much in absolute confirmation, or how much a new piece of evidence would support this or that theory, but in relative confirmation, i.e. in which theory is supported more by the new evidence. [6]
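The difference measure, and its sensitivity to how certain the hypothesis already was, can be sketched as follows (the probabilities are hypothetical):

```python
def confirms(p_h_given_e, p_h):
    """E confirms H exactly when P(H|E) > P(H)."""
    return p_h_given_e > p_h

def difference_measure(p_h_given_e, p_h):
    """Degree of support as the difference P(H|E) - P(H)."""
    return p_h_given_e - p_h

# The same strong evidence (pushing P(H|E) to 0.99) registers very
# differently depending on the prior certainty of the hypothesis:
print(difference_measure(0.99, 0.50))   # large boost for an open question
print(difference_measure(0.99, 0.95))   # small boost for a near-certainty
```

This makes concrete the objection in the text: for a scientist who is already very confident, even strong evidence yields a small degree of support on this measure.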
A well-known problem in confirmation theory is Carl Gustav Hempel's raven paradox. [20] [19] [18] Hempel starts by pointing out that seeing a black raven counts as evidence for the hypothesis that all ravens are black while seeing a green apple is usually not taken to be evidence for or against this hypothesis. The paradox consists in the consideration that the hypothesis "all ravens are black" is logically equivalent to the hypothesis "if something is not black, then it is not a raven". [18] So since seeing a green apple counts as evidence for the second hypothesis, it should also count as evidence for the first one. [6] Bayesianism allows that seeing a green apple supports the raven-hypothesis while explaining our initial intuition otherwise. This result is reached if we assume that seeing a green apple provides minimal but still positive support for the raven-hypothesis while spotting a black raven provides significantly more support. [6] [18] [20]
Coherence plays a central role in various epistemological theories, for example, in the coherence theory of truth or in the coherence theory of justification. [21] [22] It is often assumed that sets of beliefs are more likely to be true if they are coherent than otherwise. [1] For example, we would be more likely to trust a detective who can connect all the pieces of evidence into a coherent story. But there is no general agreement as to how coherence is to be defined. [1] [4] Bayesianism has been applied to this field by suggesting precise definitions of coherence in terms of probability, which can then be employed to tackle other problems surrounding coherence. [4] One such definition was proposed by Tomoji Shogenji, who suggests that the coherence between two beliefs is equal to the probability of their conjunction divided by the probabilities of each by itself, i.e. C(A, B) = P(A ∧ B) / (P(A) × P(B)). [4] [23] Intuitively, this measures how likely it is that the two beliefs are true at the same time, compared to how likely this would be if they were neutrally related to each other. [23] The coherence is high if the two beliefs are relevant to each other. [4] Coherence defined this way is relative to a credence assignment. This means that two propositions may have a high coherence for one agent and a low coherence for another agent due to the difference in the prior probabilities of the agents' credences. [4]
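Shogenji's measure can be sketched directly; the probabilities below are hypothetical and only illustrate the neutral and positively relevant cases:

```python
def shogenji_coherence(p_a, p_b, p_a_and_b):
    """Shogenji's measure: C(A, B) = P(A and B) / (P(A) * P(B)).
    C = 1: the beliefs are neutrally (independently) related;
    C > 1: each belief makes the other more likely."""
    return p_a_and_b / (p_a * p_b)

# Neutrally related beliefs: P(A and B) = P(A) * P(B), so C = 1.
print(shogenji_coherence(0.5, 0.4, 0.20))
# Positively relevant beliefs: the conjunction is more likely than
# independence would predict, so C rises above 1 (here to about 1.75).
print(shogenji_coherence(0.5, 0.4, 0.35))
```

Since the inputs are an agent's credences, the same pair of propositions can score differently for different agents, as the text notes.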
Social epistemology studies the relevance of social factors for knowledge. [24] In the field of science, for example, this is relevant since individual scientists have to place their trust in some claimed discoveries of other scientists in order to progress. [1] The Bayesian approach can be applied to various topics in social epistemology. For example, probabilistic reasoning can be used in the field of testimony to evaluate how reliable a given report is. [6] In this way, it can be formally shown that witness reports that are probabilistically independent of each other provide more support than otherwise. [1] Another topic in social epistemology concerns the question of how to aggregate the beliefs of the individuals within a group to arrive at the belief of the group as a whole. [24] Bayesianism approaches this problem by aggregating the probability assignments of the different individuals. [6] [1]
In order to draw probabilistic inferences based on new evidence, it is necessary to already have a prior probability assigned to the proposition in question. [25] But this is not always the case: there are many propositions that the agent never considered and therefore lacks a credence for. This problem is usually solved by assigning a probability to the proposition in question in order to learn from the new evidence through conditionalization. [6] [26] The problem of priors concerns the question of how this initial assignment should be done. [25] Subjective Bayesians hold that there are no or few constraints besides probabilistic coherence that determine how we assign the initial probabilities. The argument for this freedom in choosing the initial credence is that the credences will change as we acquire more evidence and will converge on the same value after enough steps no matter where we start. [6] Objective Bayesians, on the other hand, assert that there are various constraints that determine the initial assignment. One important constraint is the principle of indifference. [5] [25] It states that the credences should be distributed equally among all the possible outcomes. [27] [10] For example, suppose an agent wants to predict the color of balls drawn from an urn containing only red and black balls without any information about the ratio of red to black balls. [6] Applied to this situation, the principle of indifference states that the agent should initially assume that the probability of drawing a red ball is 50%. This is due to symmetry considerations: it is the only assignment in which the prior probabilities are invariant under a change in label. [6] While this approach works for some cases, it produces paradoxes in others. Another objection is that one should not assign prior probabilities based on initial ignorance. [6]
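The subjective Bayesians' convergence argument can be illustrated with a small simulation. The hypothesis grid, the two priors, and the draw sequence below are all invented for illustration: two agents start from very different priors over the chance of drawing red, conditionalize on the same sequence of draws, and end up with nearly identical credences.

```python
from fractions import Fraction

# Candidate hypotheses about the chance of drawing a red ball.
chances = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]

# One indifferent prior, one heavily opinionated prior.
subjective = {c: Fraction(1, 3) for c in chances}
opinionated = {Fraction(1, 4): Fraction(8, 10),
               Fraction(1, 2): Fraction(1, 10),
               Fraction(3, 4): Fraction(1, 10)}

def update(prior, red_drawn):
    """One step of conditionalization:
    P(chance | draw) is proportional to P(draw | chance) * P(chance)."""
    unnorm = {c: (c if red_drawn else 1 - c) * p for c, p in prior.items()}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

# Both agents observe the same long run of mostly-red draws (30 red, 10 black)...
draws = [True, True, False, True, True, True, False, True] * 5
for d in draws:
    subjective = update(subjective, d)
    opinionated = update(opinionated, d)

# ...and their credences in "the chance of red is 3/4" nearly coincide.
print(float(subjective[Fraction(3, 4)]))
print(float(opinionated[Fraction(3, 4)]))
```

After forty draws both agents assign almost all their credence to the 3/4 hypothesis, despite the opinionated agent having started at only 10%; this washing-out of the prior is what the convergence argument appeals to.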
The norms of rationality according to the standard definitions of Bayesian epistemology assume logical omniscience: the agent has to make sure to exactly follow all the laws of probability for all her credences in order to count as rational. [28] [29] Whoever fails to do so is vulnerable to Dutch books and is therefore irrational. This is an unrealistic standard for human beings, as critics have pointed out. [6]
The problem of old evidence concerns cases in which the agent does not know at the time of acquiring a piece of evidence that it confirms a hypothesis but only learns about this supporting relation later. [6] Normally, the agent would increase her belief in the hypothesis after discovering this relation. But this is not allowed in Bayesian confirmation theory since conditionalization can only happen upon a change in the probability of the evidential statement, which is not the case here. [6] [30] For example, the observation of certain anomalies in the orbit of Mercury is evidence for the theory of general relativity. But this data had been obtained before the theory was formulated, thereby counting as old evidence. [30]