In statistical hypothesis testing, a **uniformly most powerful** (**UMP**) **test** is a hypothesis test which has the **greatest power** among all possible tests of a given size *α*. For example, according to the Neyman–Pearson lemma, the likelihood-ratio test is UMP for testing simple (point) hypotheses.

Let $X$ denote a random vector (corresponding to the measurements), taken from a parametrized family of probability density functions or probability mass functions $f_{\theta}(x)$, which depends on the unknown deterministic parameter $\theta \in \Theta$. The parameter space $\Theta$ is partitioned into two disjoint sets $\Theta_0$ and $\Theta_1$. Let $H_0$ denote the hypothesis that $\theta \in \Theta_0$, and let $H_1$ denote the hypothesis that $\theta \in \Theta_1$. The binary test of hypotheses is performed using a test function $\varphi(x)$ with a reject region $R$ (a subset of measurement space):

$$\varphi(x) = \begin{cases} 1 & \text{if } x \in R \\ 0 & \text{if } x \in A \end{cases}$$

meaning that $H_1$ is in force if the measurement $X \in R$ and that $H_0$ is in force if the measurement $X \in A$. Note that $A \cup R$ is a disjoint covering of the measurement space.

A test function $\varphi(x)$ is UMP of size $\alpha$ if for any other test function $\varphi'(x)$ satisfying

$$\sup_{\theta\in\Theta_0}\operatorname{E}[\varphi'(X)\mid\theta] = \alpha' \le \alpha = \sup_{\theta\in\Theta_0}\operatorname{E}[\varphi(X)\mid\theta]$$

we have

$$\forall \theta \in \Theta_1, \quad \operatorname{E}[\varphi'(X)\mid\theta] = 1 - \beta'(\theta) \le 1 - \beta(\theta) = \operatorname{E}[\varphi(X)\mid\theta].$$

The Karlin–Rubin theorem can be regarded as an extension of the Neyman–Pearson lemma for composite hypotheses.^{[1]} Consider a scalar measurement having a probability density function $f_{\theta}(x)$ parameterized by a scalar parameter *θ*, and define the likelihood ratio $l(x) = f_{\theta_1}(x) / f_{\theta_0}(x)$. If $l(x)$ is monotone non-decreasing, in $x$, for any pair $\theta_1 \ge \theta_0$ (meaning that the greater $x$ is, the more likely $H_1$ is), then the threshold test:

$$\varphi(x) = \begin{cases} 1 & \text{if } x > x_0 \\ 0 & \text{if } x < x_0 \end{cases}$$

- where $x_0$ is chosen such that $\operatorname{E}_{\theta_0}[\varphi(X)] = \alpha$

is the UMP test of size *α* for testing $H_0: \theta \le \theta_0$ vs. $H_1: \theta > \theta_0$.

Note that exactly the same test is also UMP for testing $H_0: \theta = \theta_0$ vs. $H_1: \theta > \theta_0$.

Although the Karlin–Rubin theorem may seem weak because of its restriction to a scalar parameter and scalar measurement, it turns out that there exists a host of problems for which the theorem holds. In particular, the one-dimensional exponential family of probability density functions or probability mass functions with

$$f_{\theta}(x) = g(\theta) h(x) \exp\left(\eta(\theta) T(x)\right)$$

has a monotone non-decreasing likelihood ratio in the sufficient statistic $T(x)$, provided that $\eta(\theta)$ is non-decreasing.
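As a minimal sketch of the theorem at work, consider the $N(\theta, 1)$ location family, a one-parameter exponential family with $T(x) = x$ and $\eta(\theta) = \theta$. The threshold and the power function of the UMP one-sided test can be computed directly (the specific numbers below are illustrative, using only the Python standard library):

```python
from statistics import NormalDist

# Karlin-Rubin sketch for the N(theta, 1) family: the UMP size-alpha test
# of H0: theta <= theta0 vs H1: theta > theta0 rejects when x exceeds a
# threshold x0 chosen so that P_{theta0}(X > x0) = alpha.

def ump_threshold(theta0: float, alpha: float) -> float:
    """Threshold x0 such that the test 1{x > x0} has size alpha at theta0."""
    return NormalDist(mu=theta0, sigma=1.0).inv_cdf(1.0 - alpha)

def power(theta: float, x0: float) -> float:
    """Power E_theta[phi(X)] = P_theta(X > x0) of the threshold test."""
    return 1.0 - NormalDist(mu=theta, sigma=1.0).cdf(x0)

x0 = ump_threshold(theta0=0.0, alpha=0.05)   # about 1.645
# Size is attained at theta0, and power increases in theta (the MLR at work):
print(round(x0, 3), round(power(0.0, x0), 3), round(power(1.0, x0), 3))
```

The same threshold is best simultaneously against every alternative $\theta > \theta_0$, which is exactly the "uniform" part of UMP.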

Let $X = (X_0, X_1, \dots, X_{M-1})$ denote i.i.d. normally distributed $p$-dimensional random vectors with mean $\theta m$ and covariance matrix $R$. We then have

$$f_{\theta}(X) = (2\pi)^{-Mp/2} |R|^{-M/2} \exp\left\{ -\frac{1}{2} \sum_{n=0}^{M-1} (X_n - \theta m)^T R^{-1} (X_n - \theta m) \right\}$$

$$= (2\pi)^{-Mp/2} |R|^{-M/2} \exp\left\{ -\frac{1}{2} M \theta^2 m^T R^{-1} m \right\} \exp\left\{ -\frac{1}{2} \sum_{n=0}^{M-1} X_n^T R^{-1} X_n \right\} \exp\left\{ \theta\, m^T R^{-1} \sum_{n=0}^{M-1} X_n \right\}$$

which is exactly in the form of the exponential family shown in the previous section, with the sufficient statistic being

$$T(X) = m^T R^{-1} \sum_{n=0}^{M-1} X_n.$$

Thus, we conclude that the test

$$\varphi(T) = \begin{cases} 1 & \text{if } T > t_0 \\ 0 & \text{if } T < t_0 \end{cases}, \qquad \operatorname{E}_{\theta_0}[\varphi(T)] = \alpha$$

is the UMP test of size $\alpha$ for testing $H_0: \theta \le \theta_0$ vs. $H_1: \theta > \theta_0$.
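This example can be made concrete numerically. The sketch below (the dimensions, $m$, and diagonal $R$ are illustrative assumptions, not part of the text) computes the sufficient statistic $T(X)$ and the threshold $t_0$; under $\theta$, $T \sim N(M\theta c, Mc)$ with $c = m^T R^{-1} m$, so $t_0$ follows from the null distribution of $T$:

```python
from statistics import NormalDist

# Toy instance of the UMP test above: p = 2, M measurements, mean
# direction m, and a diagonal covariance R = diag(2, 4), so applying
# R^{-1} is just elementwise scaling.
M = 10
m = (1.0, 2.0)
R_inv_diag = (0.5, 0.25)            # diagonal of R^{-1}

# c = m^T R^{-1} m; under theta, T ~ N(M * theta * c, M * c).
c = sum(mi * ri * mi for mi, ri in zip(m, R_inv_diag))

def sufficient_statistic(samples):
    """T(X) = m^T R^{-1} sum_n X_n for a list of 2-d measurement tuples."""
    s = [sum(x[i] for x in samples) for i in range(2)]
    return sum(mi * ri * si for mi, ri, si in zip(m, R_inv_diag, s))

def threshold(theta0, alpha):
    """t0 with P_{theta0}(T > t0) = alpha, from the null law of T."""
    return NormalDist(M * theta0 * c, (M * c) ** 0.5).inv_cdf(1 - alpha)

t0 = threshold(theta0=0.0, alpha=0.05)
samples = [(0.1 * n, -0.05 * n) for n in range(M)]   # deterministic demo data
print(c, round(t0, 3), round(sufficient_statistic(samples), 3))
```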

Finally, we note that in general, UMP tests do not exist for vector parameters or for two-sided tests (a test in which one hypothesis lies on both sides of the alternative). The reason is that in these situations, the most powerful test of a given size for one possible value of the parameter (e.g., for $\theta_1$ where $\theta_1 > \theta_0$) is different from the most powerful test of the same size for a different value of the parameter (e.g., for $\theta_2$ where $\theta_2 < \theta_0$). As a result, no test is **uniformly** most powerful in these situations.
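This failure can be seen numerically in the simplest Gaussian setting (the specific alternatives $\pm 1$ below are illustrative): the most powerful test against a positive alternative rejects for large $x$, the most powerful test against a negative alternative rejects for small $x$, and each is poor against the other's alternative.

```python
from statistics import NormalDist

# X ~ N(theta, 1), testing H0: theta = 0 at size alpha = 0.05.
# By Neyman-Pearson, the MP test against theta1 = +1 rejects if x > z,
# while the MP test against theta2 = -1 rejects if x < -z: the two
# size-0.05 rejection regions are disjoint half-lines.
alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha)          # about 1.645

def power_right(theta):
    """Power of the size-alpha test 'reject if x > z' (MP vs theta > 0)."""
    return 1 - NormalDist(theta, 1).cdf(z)

def power_left(theta):
    """Power of the size-alpha test 'reject if x < -z' (MP vs theta < 0)."""
    return NormalDist(theta, 1).cdf(-z)

# Each test is best against "its" alternative and weak against the other,
# so no single test is uniformly most powerful over both sides.
print(round(power_right(1.0), 3), round(power_left(1.0), 4))
```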


In physics, the **cross section** is a measure of the probability that a specific process will take place when some kind of radiant excitation intersects a localized phenomenon. For example, the Rutherford cross-section is a measure of probability that an alpha particle will be deflected by a given angle during a collision with an atomic nucleus. Cross section is typically denoted *σ* (sigma) and is expressed in units of area, more specifically in barns. In a way, it can be thought of as the size of the object that the excitation must hit in order for the process to occur, but more exactly, it is a parameter of a stochastic process.

In integral calculus, an **elliptic integral** is one of a number of related functions defined as the value of certain integrals, which were first studied by Giulio Fagnano and Leonhard Euler. Their name originates from their originally arising in connection with the problem of finding the arc length of an ellipse.

In mathematics, the **special unitary group** of degree *n*, denoted SU(*n*), is the Lie group of *n* × *n* unitary matrices with determinant 1.
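As a small hedged check of the definition (the particular matrix entries below are arbitrary choices): every element of SU(2) can be written as $\begin{pmatrix} a & -\bar b \\ b & \bar a \end{pmatrix}$ with $|a|^2 + |b|^2 = 1$, and such a matrix indeed satisfies both defining properties, unitarity and unit determinant.

```python
import cmath

# Build an SU(2) element from a, b with |a|^2 + |b|^2 = 1 and verify
# U* U = I (unitarity) and det U = 1.
a = cmath.exp(1j * 0.7) * 0.6        # |a| = 0.6
b = cmath.exp(-1j * 1.3) * 0.8       # |b| = 0.8, so |a|^2 + |b|^2 = 1
U = [[a, -b.conjugate()], [b, a.conjugate()]]

def conj_transpose(M):
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

UhU = matmul(conj_transpose(U), U)
det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
# det = a * conj(a) + conj(b) * b = |a|^2 + |b|^2 = 1
print(abs(det - 1) < 1e-12)
```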

In mathematics, the **Laplace operator** or **Laplacian** is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols $\nabla\cdot\nabla$, $\nabla^2$, or $\Delta$. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian Δ*f* (*p*) of a function *f* at a point *p* measures by how much the average value of *f* over small spheres or balls centered at *p* deviates from *f* (*p*).
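The "average over small spheres" reading can be checked numerically. In two dimensions the mean of $f$ over a circle of radius $r$ centered at $p$ exceeds $f(p)$ by $(r^2/4)\,\Delta f(p) + O(r^4)$; the sketch below (the test function and point are illustrative) recovers $\Delta f = 4$ for $f(x,y) = x^2 + y^2$:

```python
import math

def f(x, y):
    return x * x + y * y          # Laplacian is 4 everywhere

def circle_average(px, py, r, n=10000):
    """Mean of f over n equally spaced points of the circle of radius r."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        total += f(px + r * math.cos(t), py + r * math.sin(t))
    return total / n

p = (0.3, -0.7)
r = 1e-3
# Invert the mean-value expansion: Laplacian ~ 4 * (avg - f(p)) / r^2.
laplacian_estimate = 4 * (circle_average(*p, r) - f(*p)) / (r * r)
print(round(laplacian_estimate, 3))
```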

In mathematics, a **Green's function** is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions.
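A concrete sketch for the simplest case (the operator, interval, and forcing below are illustrative choices): for $L = d^2/dx^2$ on $[0,1]$ with homogeneous Dirichlet conditions $u(0)=u(1)=0$, the Green's function is $G(x,s) = s(x-1)$ for $s \le x$ and $G(x,s) = x(s-1)$ for $s > x$, and $u(x) = \int_0^1 G(x,s) f(s)\,ds$ solves $u'' = f$:

```python
def G(x, s):
    """Green's function of d^2/dx^2 on [0, 1] with Dirichlet conditions."""
    return s * (x - 1) if s <= x else x * (s - 1)

def solve(f, x, n=20000):
    """Trapezoid-rule evaluation of u(x) = int_0^1 G(x, s) f(s) ds."""
    h = 1.0 / n
    total = 0.5 * (G(x, 0.0) * f(0.0) + G(x, 1.0) * f(1.0))
    total += sum(G(x, k * h) * f(k * h) for k in range(1, n))
    return total * h

# For f = 1 the exact solution of u'' = 1, u(0) = u(1) = 0 is x(x - 1)/2.
x = 0.3
u = solve(lambda s: 1.0, x)
print(round(u, 6), round(x * (x - 1) / 2, 6))
```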

In probability and statistics, an **exponential family** is a parametric set of probability distributions of a certain form, specified below. This special form is chosen for mathematical convenience, based on some useful algebraic properties, as well as for generality, as exponential families are in a sense very natural sets of distributions to consider. The term **exponential class** is sometimes used in place of "exponential family", or the older term **Koopman–Darmois family**. The terms "distribution" and "family" are often used loosely: specifically, *an* exponential family is a *set* of distributions, where the specific distribution varies with the parameter; however, a parametric *family* of distributions is often referred to as "*a* distribution", and the set of all exponential families is sometimes loosely referred to as "the" exponential family. Exponential families are important because they possess a variety of desirable properties, most importantly the existence of a sufficient statistic.
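As a small worked instance: the Bernoulli($p$) distribution can be rewritten in the exponential-family form $p(x) = h(x)\exp(\eta T(x) - A(\eta))$ with $T(x) = x$, $h(x) = 1$, natural parameter $\eta = \log\frac{p}{1-p}$ (the log-odds), and $A(\eta) = \log(1 + e^{\eta})$. The sketch below checks the two forms agree:

```python
import math

def bernoulli_direct(x, p):
    """Standard Bernoulli pmf p^x (1 - p)^(1 - x) for x in {0, 1}."""
    return p ** x * (1 - p) ** (1 - x)

def bernoulli_expfam(x, p):
    """Same pmf in exponential-family form h(x) exp(eta * T(x) - A(eta))."""
    eta = math.log(p / (1 - p))
    A = math.log(1 + math.exp(eta))
    return math.exp(eta * x - A)

for p in (0.1, 0.5, 0.9):
    for x in (0, 1):
        assert abs(bernoulli_direct(x, p) - bernoulli_expfam(x, p)) < 1e-12
print("Bernoulli matches its exponential-family form")
```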

In statistics, the **Lehmann–Scheffé theorem** is a prominent statement, tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation. The theorem states that any estimator which is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity. The Lehmann–Scheffé theorem is named after Erich Leo Lehmann and Henry Scheffé, given their two early papers.
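A Monte Carlo illustration (not a proof; the sample sizes and seed are arbitrary): for $N(\mu, 1)$ data, the sample mean depends on the data only through the complete sufficient statistic $\sum x_i$, so by Lehmann–Scheffé it is the unique best unbiased estimator of $\mu$; the sample median is also unbiased here but has larger variance.

```python
import random
import statistics

random.seed(0)
n, reps = 9, 20000
means, medians = [], []
for _ in range(reps):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    means.append(statistics.fmean(x))
    medians.append(statistics.median(x))

var_mean = statistics.pvariance(means)       # about 1/9
var_median = statistics.pvariance(medians)   # larger (about pi/(2n) asymptotically)
print(round(var_mean, 4), round(var_median, 4), var_mean < var_median)
```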

In mathematics, **theta functions** are special functions of several complex variables. They are important in many areas, including the theories of Abelian varieties and moduli spaces, and of quadratic forms. They have also been applied to soliton theory. When generalized to a Grassmann algebra, they also appear in quantum field theory.
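A quick numeric sketch of the simplest theta series (a hedged check of a classical special value, not part of the text above): $\vartheta_3(0,q) = \sum_{n=-\infty}^{\infty} q^{n^2} = 1 + 2q + 2q^4 + 2q^9 + \cdots$, and at $q = e^{-\pi}$ its value is known in closed form, $\vartheta_3(0, e^{-\pi}) = \pi^{1/4}/\Gamma(3/4)$.

```python
import math

# Truncated theta series; q^(n^2) decays extremely fast for q = e^(-pi).
q = math.exp(-math.pi)
theta3 = 1.0 + 2.0 * sum(q ** (n * n) for n in range(1, 20))
closed_form = math.pi ** 0.25 / math.gamma(0.75)
print(theta3, closed_form)
```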

In mathematics, the **Jacobi elliptic functions** are a set of basic elliptic functions. They are found in the description of the motion of a pendulum, as well as in the design of electronic elliptic filters. While trigonometric functions are defined with reference to a circle, the Jacobi elliptic functions are a generalization which refer to other conic sections, the ellipse in particular. The relation to trigonometric functions is contained in the notation, for example, by the matching notation $\operatorname{sn}$ for $\sin$. The Jacobi elliptic functions are used more often in practical problems than the Weierstrass elliptic functions as they do not require notions of complex analysis to be defined and/or understood. They were introduced by Carl Gustav Jakob Jacobi (1829). Carl Friedrich Gauss had already studied special Jacobi elliptic functions in 1797, the lemniscate elliptic functions in particular, but his work was published much later.

In probability theory, a distribution is said to be **stable** if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters. A random variable is said to be **stable** if its distribution is stable. The stable distribution family is also sometimes referred to as the **Lévy alpha-stable distribution**, after Paul Lévy, the first mathematician to have studied it.
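A Monte Carlo illustration with the most familiar stable law (sample size and seed are arbitrary): the normal distribution is the $\alpha = 2$ member of the stable family, so if $X$ and $Y$ are independent $N(0,1)$, their sum has the same shape up to scale, namely $N(0, 2)$, i.e. $X + Y$ equals $\sqrt{2}\,X$ in distribution.

```python
import random
import statistics

random.seed(1)
n = 100000
sums = [random.gauss(0, 1) + random.gauss(0, 1) for _ in range(n)]
# Mean should be near 0 and variance near 2, matching N(0, 2).
print(round(statistics.fmean(sums), 3), round(statistics.pvariance(sums), 2))
```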

In Bayesian probability, the **Jeffreys prior**, named after Sir Harold Jeffreys, is a non-informative (objective) prior distribution for a parameter space; its density function is proportional to the square root of the determinant of the Fisher information matrix:

$$p(\vec\theta) \propto \sqrt{\det \mathcal{I}(\vec\theta)}.$$
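A worked scalar instance (a sketch, not part of the definition above): for a single Bernoulli($p$) observation the score is $\frac{d}{dp}\log f(x\mid p) = \frac{x}{p} - \frac{1-x}{1-p}$, so the Fisher information is $\mathcal{I}(p) = \operatorname{E}[\text{score}^2] = \frac{1}{p(1-p)}$, and the Jeffreys prior is proportional to $p^{-1/2}(1-p)^{-1/2}$, the Beta(1/2, 1/2) density.

```python
import math

def fisher_information(p):
    """E[score^2] computed directly from the two-point Bernoulli law."""
    score1 = 1.0 / p              # score at x = 1
    score0 = -1.0 / (1.0 - p)     # score at x = 0
    return p * score1 ** 2 + (1.0 - p) * score0 ** 2

for p in (0.2, 0.5, 0.8):
    assert abs(fisher_information(p) - 1.0 / (p * (1.0 - p))) < 1e-9

jeffreys_unnorm = lambda p: math.sqrt(fisher_information(p))
print(jeffreys_unnorm(0.5))   # sqrt(4) = 2.0 at p = 1/2
```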

In probability theory and statistics, the **characteristic function** of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.
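A Monte Carlo sketch of the definition (parameters and seed are arbitrary): the empirical characteristic function, the sample mean of $e^{itX}$, should approach the known closed form for $N(\mu, \sigma^2)$, $\varphi(t) = \exp(i\mu t - \sigma^2 t^2/2)$.

```python
import cmath
import random

random.seed(2)
mu, sigma, t, n = 1.0, 2.0, 0.5, 200000
samples = [random.gauss(mu, sigma) for _ in range(n)]
# Empirical characteristic function at a single frequency t.
empirical = sum(cmath.exp(1j * t * x) for x in samples) / n
exact = cmath.exp(1j * mu * t - sigma ** 2 * t ** 2 / 2)
print(abs(empirical - exact))   # small Monte Carlo error
```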

In mathematics, **subharmonic** and **superharmonic** functions are important classes of functions used extensively in partial differential equations, complex analysis and potential theory.

In statistics, the **monotone likelihood ratio property** is a property of the ratio of two probability density functions (PDFs). Formally, distributions *ƒ*(*x*) and *g*(*x*) bear the property if

$$\text{for every } x_2 > x_1, \quad \frac{f(x_2)}{g(x_2)} \ge \frac{f(x_1)}{g(x_1)},$$

that is, if the ratio $f(x)/g(x)$ is non-decreasing in the argument $x$.
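A numeric spot check for a standard example (the particular means are illustrative): for the normal location family with $f = N(1,1)$ and $g = N(0,1)$ densities, the ratio $f(x)/g(x) = e^{x - 1/2}$ is strictly increasing in $x$.

```python
import math

def npdf(x, mu):
    """Standard-deviation-1 normal density with mean mu."""
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

ratio = lambda x: npdf(x, 1.0) / npdf(x, 0.0)   # equals exp(x - 0.5)
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
values = [ratio(x) for x in xs]
assert all(a < b for a, b in zip(values, values[1:]))   # monotone
print([round(v, 4) for v in values])
```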

A **ratio distribution** is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions. Given two random variables *X* and *Y*, the distribution of the random variable *Z* that is formed as the ratio *Z* = *X*/*Y* is a *ratio distribution*.
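A Monte Carlo sketch of the classical example (sample size and seed are arbitrary): if $X$ and $Y$ are independent standard normals, $Z = X/Y$ is standard Cauchy. Since the Cauchy mean does not exist, the check below uses quantiles, whose theoretical values are $-1$, $0$, and $+1$ for the quartiles and median.

```python
import random

random.seed(3)
n = 200000
# Ratio of two independent standard normals, sorted for quantile lookup.
z = sorted(random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n))
q1, med, q3 = z[n // 4], z[n // 2], z[3 * n // 4]
print(round(q1, 2), round(med, 2), round(q3, 2))   # near -1, 0, 1
```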

In mathematics and statistics, a **circular mean** or **angular mean** is a mean designed for angles and similar cyclic quantities, such as times of day and fractional parts of real numbers. This is necessary since most of the usual means may not be appropriate on angle-like quantities. For example, the arithmetic mean of 0° and 360° is 180°, which is misleading because 360° equals 0° modulo a full cycle. As another example, the "average time" between 11 PM and 1 AM is either midnight or noon, depending on whether the two times are part of a single night or part of a single calendar day. The circular mean is one of the simplest examples of circular statistics and of statistics of non-Euclidean spaces.
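The standard construction is to average the unit vectors $(\cos\theta, \sin\theta)$ and take the angle of the resulting mean vector; a minimal sketch:

```python
import math

def circular_mean_deg(angles):
    """Circular mean of angles in degrees, via the mean unit vector."""
    s = sum(math.sin(math.radians(a)) for a in angles)
    c = sum(math.cos(math.radians(a)) for a in angles)
    return math.degrees(math.atan2(s, c)) % 360

# 350 deg and 10 deg straddle 0 deg: the arithmetic mean is 180 deg,
# but the circular mean is 0 deg (up to floating-point rounding, which
# may report it as a value negligibly below 360).
print(circular_mean_deg([350, 10]), circular_mean_deg([30, 90]))
```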

**Landen's transformation** is a mapping of the parameters of an elliptic integral, useful for the efficient numerical evaluation of elliptic functions. It was originally due to John Landen and independently rediscovered by Carl Friedrich Gauss.
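Landen's transformation underlies Gauss's arithmetic–geometric mean (AGM) iteration, and a standard consequence is a quadratically convergent formula for the complete elliptic integral of the first kind, $K(k) = \pi / (2\,\operatorname{AGM}(1, \sqrt{1 - k^2}))$. A sketch (the modulus convention here is $k$, not the parameter $m = k^2$):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b; converges quadratically."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def ellipk(k):
    """Complete elliptic integral of the first kind K(k), modulus k."""
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

# K(0) reduces to the integral of 1 over [0, pi/2], i.e. pi/2.
print(ellipk(0.0), math.pi / 2)
```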

In statistics, **errors-in-variables models** or **measurement error models** are regression models that account for measurement errors in the independent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in the dependent variables, or responses.

A **product distribution** is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables *X* and *Y*, the distribution of the random variable *Z* that is formed as the product $Z = XY$ is a *product distribution*.
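A Monte Carlo sketch with a standard instance (sample size and seed are arbitrary): for $X, Y$ independent and uniform on $[0,1]$, the product $Z = XY$ has density $-\ln z$ on $(0,1]$ and mean $E[Z] = E[X]E[Y] = 1/4$, which the simulation below checks.

```python
import random
import statistics

random.seed(4)
n = 100000
# Products of pairs of independent standard uniforms.
z = [random.random() * random.random() for _ in range(n)]
print(round(statistics.fmean(z), 3))   # near 0.25
```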

**Computational anatomy (CA)** is a discipline within medical imaging focusing on the study of anatomical shape and form at the visible or gross anatomical scale of morphology. The field is broadly defined and includes foundations in anatomy, applied mathematics and pure mathematics, including medical imaging, neuroscience, physics, probability, and statistics. It focuses on the anatomical structures being imaged, rather than the medical imaging devices. The central focus of the sub-field of computational anatomy within medical imaging is mapping information across anatomical coordinate systems, most often dense information measured within a magnetic resonance image (MRI). The introduction of flows into CA, which are akin to the equations of motion used in fluid dynamics, exploits the notion that dense coordinates in image analysis follow the Lagrangian and Eulerian equations of motion. In models based on Lagrangian and Eulerian flows of diffeomorphisms, the constraint is associated to topological properties, such as open sets being preserved, coordinates not crossing implying uniqueness and existence of the inverse mapping, and connected sets remaining connected. The use of diffeomorphic methods grew quickly to dominate the field of mapping methods post Christensen's original paper, with fast and symmetric methods becoming available.

- ↑ Casella, G.; Berger, R. L. (2008). *Statistical Inference*. Brooks/Cole. ISBN 0-495-39187-5. (Theorem 8.3.17)

- Ferguson, T. S. (1967). "Sec. 5.2: Uniformly most powerful tests". *Mathematical Statistics: A decision theoretic approach*. New York: Academic Press.
- Mood, A. M.; Graybill, F. A.; Boes, D. C. (1974). "Sec. IX.3.2: Uniformly most powerful tests". *Introduction to the theory of statistics* (3rd ed.). New York: McGraw-Hill.
- Scharf, L. L. (1991). *Statistical Signal Processing*. Addison-Wesley. Section 4.7.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
