Bayesian experimental design

Bayesian experimental design provides a general probability-theoretic framework from which other theories of experimental design can be derived. It is based on Bayesian inference to interpret the observations or data acquired during the experiment. This allows accounting for both prior knowledge of the parameters to be determined and uncertainties in the observations.

The theory of Bayesian experimental design is to a certain extent based on the theory of making optimal decisions under uncertainty. The aim when designing an experiment is to maximize the expected utility of the experiment outcome. The utility is most commonly defined in terms of a measure of the accuracy of the information provided by the experiment (e.g., the Shannon information or the negative of the variance), but may also involve factors such as the financial cost of performing the experiment. The optimal experimental design depends on the particular utility criterion chosen.

Relations to more specialized optimal design theory

Linear theory

If the model is linear, the prior probability density function (PDF) is homogeneous, and the observational errors are normally distributed, then the theory simplifies to classical optimal experimental design theory.
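In the linear-Gaussian case the posterior is available in closed form, so candidate designs can be scored directly. A minimal sketch for a hypothetical one-parameter model y_i = θ·x_i + noise with a conjugate normal prior (the model, names, and numbers here are illustrative assumptions, not from the text):

```python
def posterior_variance(xs, prior_var=1.0, noise_var=0.25):
    # Linear model y_i = theta * x_i + eps_i with eps_i ~ N(0, noise_var)
    # and conjugate prior theta ~ N(0, prior_var).  Conjugacy gives a
    # closed-form posterior variance that depends only on the design xs,
    # not on the observed data -- the classical optimal-design setting.
    precision = 1.0 / prior_var + sum(x * x for x in xs) / noise_var
    return 1.0 / precision

# Score two candidate designs of three measurements each: larger |x_i|
# is more informative, so the second design has the smaller variance.
d1 = [0.1, 0.2, 0.3]
d2 = [1.0, 1.0, 1.0]
assert posterior_variance(d2) < posterior_variance(d1)
```

Minimizing such a posterior variance (or, with several parameters, a functional of the posterior covariance such as its log-determinant) recovers the classical A- and D-optimality criteria.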

Approximate normality

In numerous publications on Bayesian experimental design, it is (often implicitly) assumed that all posterior distributions will be approximately normal. This allows the expected utility to be calculated using linear theory, averaging over the space of model parameters. [1] Caution must, however, be taken when applying this method, since approximate normality of all possible posteriors is difficult to verify, even in cases of normal observational errors and uniform prior probability.

Posterior distribution

In many cases, the posterior distribution is not available in closed form and has to be approximated using numerical methods. The most common approach is to use Markov chain Monte Carlo methods to generate samples from the posterior, which can then be used to approximate the expected utility.
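As a concrete illustration, the sketch below draws posterior samples with a random-walk Metropolis sampler on a deliberately simple conjugate model, so the result can be checked against the exact answer (the model and all names are hypothetical, chosen for illustration only):

```python
import math
import random

random.seed(2)

def metropolis_posterior(y, xi, n_samples=4000, step=0.8):
    # Random-walk Metropolis sampler targeting p(theta | y, xi) for the
    # toy model theta ~ N(0, 1), y | theta ~ N(xi * theta, 1).
    def log_post(theta):
        # Unnormalised log-posterior = log prior + log likelihood.
        return -0.5 * theta ** 2 - 0.5 * (y - xi * theta) ** 2

    theta, samples = 0.0, []
    for _ in range(n_samples):
        proposal = theta + random.gauss(0.0, step)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < log_post(proposal) - log_post(theta):
            theta = proposal
        samples.append(theta)
    return samples

# For this conjugate model the exact posterior mean is xi*y/(1 + xi**2),
# so with y = 2 and xi = 1 the chain should average close to 1.
draws = metropolis_posterior(y=2.0, xi=1.0)
```

Samples like these can then be averaged to approximate the expected utility of a candidate design.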

Another approach is to use a variational Bayes approximation of the posterior, which can often be calculated in closed form. This approach has the advantage of being computationally more efficient than Monte Carlo methods, but the disadvantage that the approximation might not be very accurate.

Some authors proposed approaches that use the posterior predictive distribution to assess the effect of new measurements on prediction uncertainty, [2] [3] while others suggest maximizing the mutual information between parameters, predictions and potential new experiments. [4]

Mathematical formulation

Notation
  θ : parameters to be determined
  y : observation or data
  ξ : design
  p(y | θ, ξ) : PDF for making observation y, given parameter values θ and design ξ
  p(θ) : prior PDF
  p(y | ξ) : marginal PDF in observation space
  p(θ | y, ξ) : posterior PDF
  U(ξ) : utility of the design ξ
  U(y, ξ) : utility of the experiment outcome after observation y with design ξ

Given a vector θ of parameters to determine, a prior probability p(θ) over those parameters and a likelihood p(y | θ, ξ) for making observation y, given parameter values θ and an experiment design ξ, the posterior probability can be calculated using Bayes' theorem

    p(θ | y, ξ) = p(y | θ, ξ) p(θ) / p(y | ξ),

where p(y | ξ) is the marginal probability density in observation space,

    p(y | ξ) = ∫ p(y | θ, ξ) p(θ) dθ.

The expected utility of an experiment with design ξ can then be defined as

    U(ξ) = ∫ U(y, ξ) p(y | ξ) dy,

where U(y, ξ) is some real-valued functional of the posterior probability p(θ | y, ξ) after making observation y using an experiment design ξ.
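The expectation over possible outcomes y is rarely tractable in closed form, so in practice the expected utility is often estimated by simulation: draw θ from the prior, simulate y from the likelihood, and average the outcome utility. A sketch under an assumed conjugate model (θ ~ N(0, 1), y | θ ~ N(ξ·θ, 1); the choice of utility, the negative squared error of the posterior mean, is likewise only an illustration):

```python
import random

random.seed(1)

def expected_utility(xi, n=5000):
    # Monte Carlo estimate of U(xi) = ∫ U(y, xi) p(y | xi) dy:
    # sampling theta from the prior and y from the likelihood is
    # equivalent to sampling y from the marginal p(y | xi).
    total = 0.0
    for _ in range(n):
        theta = random.gauss(0.0, 1.0)            # theta ~ prior
        y = xi * theta + random.gauss(0.0, 1.0)   # y ~ likelihood
        post_mean = xi * y / (1.0 + xi * xi)      # conjugate posterior mean
        total += -(post_mean - theta) ** 2        # outcome utility
    return total / n

# The analytic value is -1/(1 + xi**2), so more informative designs
# (larger |xi|) score higher.
```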

Gain in Shannon information as utility

Utility may be defined as the prior-posterior gain in Shannon information,

    U(y, ξ) = ∫ log(p(θ | y, ξ)) p(θ | y, ξ) dθ − ∫ log(p(θ)) p(θ) dθ.

Another possibility is to define the utility as

    U(y, ξ) = D_KL(p(θ | y, ξ) ‖ p(θ)),

the Kullback–Leibler divergence of the prior from the posterior distribution. Lindley (1956) noted that the expected utility will then be coordinate-independent and can be written in two forms,

    U(ξ) = ∫∫ log(p(θ | y, ξ)) p(θ, y | ξ) dθ dy − ∫ log(p(θ)) p(θ) dθ
         = ∫∫ log(p(y | θ, ξ)) p(θ, y | ξ) dy dθ − ∫ log(p(y | ξ)) p(y | ξ) dy,

of which the latter can be evaluated without the need for evaluating the individual posterior probability p(θ | y, ξ) for all possible observations y. [5] It is worth noting that the first term on the second equation line will not depend on the design ξ, as long as the observational uncertainty doesn't. On the other hand, the integral ∫ log(p(θ)) p(θ) dθ in the first form is constant for all ξ, so if the goal is to choose the design with the highest utility, the term need not be computed at all. Several authors have considered numerical techniques for evaluating and optimizing this criterion. [6] [7] Note that

U(ξ) = I(θ; y), the expected information gain being exactly the mutual information between the parameter θ and the observation y. An example of Bayesian design for linear dynamical model discrimination is given in Bania (2019). [8] Since the mutual information was difficult to calculate there, a lower bound on it was used as the utility function and maximized under a signal energy constraint. The proposed Bayesian design was also compared with the classical average D-optimal design and shown to be superior to it.
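Lindley's second form of the expected gain lends itself to a nested Monte Carlo estimate, in which the marginal density of y is itself approximated by averaging the likelihood over fresh prior draws, so no individual posterior ever has to be computed. A sketch for an assumed linear-Gaussian model θ ~ N(0, 1), y | θ ~ N(ξ·θ, 1), whose exact expected information gain is (1/2)·log(1 + ξ²):

```python
import math
import random

random.seed(0)

def log_lik(y, theta, xi, noise_sd=1.0):
    # Gaussian log-likelihood of the toy model y = xi * theta + noise.
    return (-0.5 * math.log(2.0 * math.pi * noise_sd ** 2)
            - 0.5 * ((y - xi * theta) / noise_sd) ** 2)

def eig_nested_mc(xi, n_outer=2000, n_inner=200):
    # Nested Monte Carlo estimate of
    #   E[log p(y | theta, xi)] - E[log p(y | xi)],
    # where the marginal p(y | xi) is approximated by an inner average
    # of the likelihood over fresh draws from the prior.
    total = 0.0
    for _ in range(n_outer):
        theta = random.gauss(0.0, 1.0)            # theta ~ prior
        y = xi * theta + random.gauss(0.0, 1.0)   # y ~ likelihood
        inner = sum(math.exp(log_lik(y, random.gauss(0.0, 1.0), xi))
                    for _ in range(n_inner)) / n_inner
        total += log_lik(y, theta, xi) - math.log(inner)
    return total / n_outer

# The estimate grows with |xi|, tracking the analytic value
# 0.5 * log(1 + xi**2) for this model.
```

The inner average introduces a small upward bias that vanishes as the number of inner samples grows; more efficient estimators and bounds for this criterion are the subject of the numerical work cited above.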

The Kelly criterion also describes such a utility function for a gambler seeking to maximize profit, and is used in gambling and information theory; Kelly's situation is identical to the foregoing, with the side information, or "private wire", taking the place of the experiment.


References

  1. An approach reviewed in Chaloner, Kathryn; Verdinelli, Isabella (1995), "Bayesian experimental design: a review", Statistical Science, 10 (3): 273–304, doi:10.1214/ss/1177009939
  2. Vanlier; Tiemann; Hilbers; van Riel (2012), "A Bayesian approach to targeted experiment design", Bioinformatics, 28 (8): 1136–1142, doi:10.1093/bioinformatics/bts092, PMC 3324513, PMID 22368245
  3. Thibaut; Laloy; Hermans (2021), "A new framework for experimental design using Bayesian Evidential Learning: The case of wellhead protection area", Journal of Hydrology, 603: 126903, arXiv:2105.05539, Bibcode:2021JHyd..60326903T, doi:10.1016/j.jhydrol.2021.126903, hdl:1854/LU-8759542, S2CID 234469903
  4. Liepe; Filippi; Komorowski; Stumpf (2013), "Maximizing the Information Content of Experiments in Systems Biology", PLOS Computational Biology, 9 (1): e1002888, Bibcode:2013PLSCB...9E2888L, doi:10.1371/journal.pcbi.1002888, PMC 3561087, PMID 23382663
  5. Lindley, D. V. (1956), "On a measure of information provided by an experiment", Annals of Mathematical Statistics, 27 (4): 986–1005, doi:10.1214/aoms/1177728069
  6. van den Berg; Curtis; Trampert (2003), "Optimal nonlinear Bayesian experimental design: an application to amplitude versus offset experiments", Geophysical Journal International, 155 (2): 411–421, Bibcode:2003GeoJI.155..411V, doi:10.1046/j.1365-246x.2003.02048.x
  7. Ryan, K. J. (2003), "Estimating Expected Information Gains for Experimental Designs With Application to the Random Fatigue-Limit Model", Journal of Computational and Graphical Statistics, 12 (3): 585–603, doi:10.1198/1061860032012, S2CID 119889630
  8. Bania, P. (2019), "Bayesian Input Design for Linear Dynamical Model Discrimination", Entropy, 21 (4): 351, Bibcode:2019Entrp..21..351B, doi:10.3390/e21040351, PMC 7514835, PMID 33267065
