Parametric family

In mathematics and its applications, a parametric family or a parameterized family is a family of objects (a set of related objects) whose differences depend only on the chosen values for a set of parameters.[1]

Common examples are parametrized (families of) functions, probability distributions, curves, shapes, etc.[citation needed]

In probability and its applications

Figure: a graph of the probability density functions of several normal distributions from the same parametric family.

For example, the probability density function fX of a random variable X may depend on a parameter θ. In that case, the function may be denoted fX( · ; θ) to indicate the dependence on the parameter θ. Here θ is not a formal argument of the function, as it is considered fixed; however, each different value of the parameter gives a different probability density function. The parametric family of densities is then the set of functions { fX( · ; θ) : θ ∈ Θ }, where Θ denotes the parameter space, the set of all possible values that the parameter θ can take. As an example, the normal distribution is a family of similarly shaped distributions parametrized by their mean and their variance.[2][3]
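As a concrete illustration, the following sketch represents members of the normal family as density functions indexed by the parameter pair (μ, σ); it assumes SciPy is available, and the helper name make_density is purely illustrative.

```python
# A minimal sketch of a parametric family of densities: the normal
# family indexed by theta = (mu, sigma). Uses scipy.stats; the helper
# name `make_density` is illustrative, not standard.
from scipy.stats import norm

def make_density(mu, sigma):
    """Return the density f_X( . ; theta) for a fixed theta = (mu, sigma)."""
    return lambda x: norm.pdf(x, loc=mu, scale=sigma)

# Three members of the same family, differing only in the parameter value.
family = {(mu, sigma): make_density(mu, sigma)
          for mu, sigma in [(0, 1), (0, 2), (1, 1)]}

for theta, f in family.items():
    print(theta, f(0.0))  # each parameter value gives a different density
```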

In decision theory, two-moment decision models can be applied when the decision-maker is faced with random variables drawn from a location-scale family of probability distributions.[citation needed]

In algebra and its applications

Figure: a three-dimensional graph of a Cobb–Douglas production function.

In economics, the Cobb–Douglas production function is a family of production functions parametrized by the elasticities of output with respect to the various factors of production.[citation needed]
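A minimal sketch of this family, of the form Y = A · K^α · L^β, where α and β are the output elasticities; the function name cobb_douglas and all parameter values are chosen purely for illustration.

```python
# A hedged sketch of the Cobb-Douglas family Y = A * K**alpha * L**beta,
# parametrized by the output elasticities alpha and beta (and a scale
# factor A). The function name `cobb_douglas` is illustrative.
def cobb_douglas(K, L, A=1.0, alpha=0.5, beta=0.5):
    """Output as a function of capital K and labor L for fixed parameters."""
    return A * K**alpha * L**beta

# Two members of the family: different elasticities, same inputs.
print(cobb_douglas(4.0, 9.0, alpha=0.5, beta=0.5))  # 6.0
print(cobb_douglas(4.0, 9.0, alpha=0.3, beta=0.7))  # ~7.06
```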

Figure: graphs of several quadratic polynomials, varying each of the three coefficients independently.

In algebra, the quadratic equation ax² + bx + c = 0 is in fact a family of equations parametrized by the coefficients a and b of the variable and its square and by the constant term c.[citation needed]
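The same idea in code: each coefficient triple (a, b, c) picks out one member of the family. The helper name quadratic is illustrative.

```python
# The quadratic family a*x**2 + b*x + c, indexed by the coefficient
# triple (a, b, c). A closure per parameter value, as above.
def quadratic(a, b, c):
    return lambda x: a * x**2 + b * x + c

f = quadratic(1, -3, 2)   # one member of the family: x^2 - 3x + 2
print(f(1), f(2))         # 0 0  -> roots at x = 1 and x = 2
```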

Related Research Articles

In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule, the quantity of interest and its result are distinguished. For example, the sample mean is a commonly used estimator of the population mean.
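As a sketch, the sample mean applied to simulated data; NumPy, the seed, and the population values are all illustrative.

```python
# The sample mean as an estimator of the population mean: a rule mapping
# observed data to an estimate.
import numpy as np

rng = np.random.default_rng(0)
population_mean = 5.0
sample = rng.normal(loc=population_mean, scale=2.0, size=1000)

estimate = sample.mean()   # the estimator applied to the observed data
print(estimate)            # close to 5.0 for a sample this large
```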

In statistics, a location parameter of a probability distribution is a scalar- or vector-valued parameter that determines the "location" or shift of the distribution. In the literature on location-parameter estimation, distributions with such a parameter are formally defined in several equivalent ways.

A parameter, generally, is any characteristic that can help in defining or classifying a particular system. That is, a parameter is an element of a system that is useful, or critical, when identifying the system, or when evaluating its performance, status, condition, etc.

A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data. A statistical model represents, often in considerably idealized form, the data-generating process. When referring specifically to probabilities, the corresponding term is probabilistic model.

The likelihood function is the joint probability of observed data viewed as a function of the parameters of a statistical model.

In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
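A hedged sketch of this procedure for the normal family, numerically minimizing the negative log-likelihood with SciPy; for this family the closed-form answers are the sample mean and standard deviation, which the optimizer should recover. The data and seed are illustrative.

```python
# Maximum likelihood for a normal model, sketched by numerically
# maximizing the log-likelihood (equivalently, minimizing its negative).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=3.0, size=500)

def neg_log_likelihood(theta):
    mu, log_sigma = theta              # optimize log(sigma) so sigma > 0
    sigma = np.exp(log_sigma)
    return -np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                   - (data - mu)**2 / (2 * sigma**2))

result = minimize(neg_log_likelihood, x0=[0.0, 0.0])
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)               # close to (2.0, 3.0)
print(data.mean(), data.std())         # the closed-form MLEs agree
```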

In probability and statistics, an exponential family is a parametric set of probability distributions of a certain form. This special form is chosen for mathematical convenience, including enabling the calculation of expectations and covariances by differentiation, based on some useful algebraic properties, as well as for generality, since exponential families are in a sense very natural sets of distributions to consider. The term exponential class is sometimes used in place of "exponential family", as is the older term Koopman–Darmois family. The terms "distribution" and "family" are often used loosely: an exponential family is a set of distributions, where the specific distribution varies with the parameter; however, a parametric family of distributions is often referred to as "a distribution", and the set of all exponential families is sometimes loosely referred to as "the" exponential family. Exponential families are distinguished by a variety of desirable properties, most importantly the existence of a sufficient statistic.
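For concreteness, the following sketch writes the Bernoulli distribution in exponential-family form and checks numerically that differentiating the log-partition function recovers the mean, one instance of the "expectations by differentiation" convenience mentioned above; the variable names are illustrative.

```python
# The Bernoulli distribution in exponential-family form
#   p(x) = exp(eta * x - A(eta)),  x in {0, 1},
# with natural parameter eta = log(p / (1 - p)) and log-partition
# function A(eta) = log(1 + exp(eta)).
import math

p = 0.3
eta = math.log(p / (1 - p))               # natural parameter
A = lambda t: math.log(1 + math.exp(t))   # log-partition function

# Numerical derivative of A at eta should equal E[X] = p.
h = 1e-6
dA = (A(eta + h) - A(eta - h)) / (2 * h)
print(dA, p)   # both ~0.3

# Sanity check: the exponential-family form reproduces the pmf.
for x in (0, 1):
    print(x, math.exp(eta * x - A(eta)))  # 0.7 then 0.3
```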

In statistics, Gibbs sampling or a Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm for obtaining a sequence of observations approximated from a specified multivariate probability distribution, when direct sampling is difficult. This sequence can be used to approximate the joint distribution; to approximate the marginal distribution of one of the variables, or of some subset of the variables; or to compute an integral. Typically, some of the variables correspond to observations whose values are known and hence do not need to be sampled.
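A minimal sketch of a Gibbs sampler for a standard textbook target, a bivariate normal with correlation ρ, whose full conditionals are themselves normal; the seed and the value of ρ are illustrative.

```python
# A minimal Gibbs sampler for a bivariate standard normal with
# correlation rho: each full conditional is N(rho * other, 1 - rho**2).
import numpy as np

rng = np.random.default_rng(2)
rho, n_samples = 0.8, 10_000
x, y = 0.0, 0.0
samples = np.empty((n_samples, 2))

for i in range(n_samples):
    # Alternately sample each variable from its conditional distribution.
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples[i] = (x, y)

print(np.corrcoef(samples.T)[0, 1])   # approximately rho = 0.8
```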

In mathematical statistics, the Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information.

In statistics, a generalized linear model (GLM) is a flexible generalization of ordinary linear regression. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value.
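As a sketch, a Poisson GLM with the default log link fitted with the statsmodels library on synthetic data; the data-generating coefficients and seed are illustrative.

```python
# A Poisson GLM with a log link, sketched with statsmodels on
# synthetic count data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(0, 2, size=500)
y = rng.poisson(np.exp(0.5 + 1.2 * x))        # true coefficients (0.5, 1.2)

X = sm.add_constant(x)                        # design matrix with intercept
model = sm.GLM(y, X, family=sm.families.Poisson())  # log link is the default
result = model.fit()
print(result.params)                          # close to [0.5, 1.2]
```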

In probability theory and statistics, a scale parameter is a special kind of numerical parameter of a parametric family of probability distributions. The larger the scale parameter, the more spread out the distribution.

In statistics, a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. However, while problems associated with "mixture distributions" relate to deriving the properties of the overall population from those of the sub-populations, "mixture models" are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information.
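The generative side of a mixture model can be sketched directly: draw a latent component label, then draw the observation from that component; only the pooled observations would be available to the analyst. NumPy is assumed, and all parameter values are illustrative.

```python
# Sampling from a two-component Gaussian mixture: first draw the latent
# component label, then draw the observation from that component. The
# labels are then discarded, which is exactly the inference difficulty
# mixture models address.
import numpy as np

rng = np.random.default_rng(4)
weights = [0.3, 0.7]                # mixture weights
means, sds = [0.0, 5.0], [1.0, 1.5]

labels = rng.choice(2, size=1000, p=weights)   # latent sub-population
data = rng.normal(np.array(means)[labels], np.array(sds)[labels])

# Only `data` would be observed; the pooled mean mixes both components.
print(data.mean())   # roughly 0.3*0 + 0.7*5 = 3.5
```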

In statistics, the method of moments is a method of estimating population parameters by equating sample moments with the corresponding theoretical moments and solving for the parameters. Higher moments, such as those underlying skewness and kurtosis, can be matched in the same way.
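A sketch for a Gamma(k, θ) model, whose first two moments are E[X] = kθ and Var[X] = kθ²; equating these with the sample mean and variance gives closed-form estimates. NumPy, the seed, and the true parameters are illustrative.

```python
# Method of moments for a Gamma(shape k, scale theta) model: solve
#   mean = k * theta,  var = k * theta**2
# for the parameters: k = mean**2 / var, theta = var / mean.
import numpy as np

rng = np.random.default_rng(5)
data = rng.gamma(shape=2.0, scale=3.0, size=5000)

mean, var = data.mean(), data.var()
k_hat = mean**2 / var
theta_hat = var / mean
print(k_hat, theta_hat)   # close to (2.0, 3.0)
```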

In statistics, a parametric model or parametric family or finite-dimensional model is a particular class of statistical models. Specifically, a parametric model is a family of probability distributions that has a finite number of parameters.

In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function. Equivalently, it maximizes the posterior expectation of a utility function. An alternative way of formulating an estimator within Bayesian statistics is maximum a posteriori estimation.
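As a sketch, under squared-error loss the Bayes estimator is the posterior mean; for a Beta prior on a coin's bias with binomial data the posterior is conjugate, so the estimator has a closed form. The prior and data values are illustrative.

```python
# A Bayes estimator under squared-error loss is the posterior mean. For
# a Beta(a, b) prior on a coin's bias p and k heads in n flips, the
# posterior is Beta(a + k, b + n - k), giving a closed-form estimate.
a, b = 2.0, 2.0          # prior pseudo-counts (illustrative choice)
k, n = 7, 10             # observed data: 7 heads in 10 flips

posterior_mean = (a + k) / (a + b + n)   # Bayes estimate of p
mle = k / n                              # compare: maximum likelihood
print(posterior_mean, mle)               # ~0.643 vs 0.7
```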

Twisting properties, in general terms, are associated with the properties of samples that can be identified with statistics suitable for exchange.

In statistics, identifiability is a property which a model must satisfy for precise inference to be possible. A model is identifiable if it is theoretically possible to learn the true values of this model's underlying parameters after obtaining an infinite number of observations from it. Mathematically, this is equivalent to saying that different values of the parameters must generate different probability distributions of the observable variables. Usually the model is identifiable only under certain technical restrictions, in which case the set of these requirements is called the identification conditions.

In Bayesian statistics, the posterior predictive distribution is the distribution of possible unobserved values conditional on the observed values.

In statistics, the variance function is a smooth function that depicts the variance of a random quantity as a function of its mean. The variance function is a measure of heteroscedasticity and plays a large role in many settings of statistical modelling. It is a main ingredient in the generalized linear model framework and a tool used in non-parametric regression, semiparametric regression and functional data analysis. In parametric modeling, variance functions take on a parametric form and explicitly describe the relationship between the variance and the mean of a random quantity. In a non-parametric setting, the variance function is assumed to be a smooth function.

Parametric programming is a type of mathematical optimization in which the optimization problem is solved as a function of one or multiple parameters. Developed in parallel to sensitivity analysis, its earliest mention can be found in a thesis from 1952. Since then, there have been considerable developments for the cases of multiple parameters, the presence of integer variables, and nonlinearities.

References

  1. "All of Nonparametric Statistics". Springer Texts in Statistics. 2006. doi:10.1007/0-387-30623-4.
  2. Mukhopadhyay, Nitis (2000). Probability and Statistical Inference. United States of America: Marcel Dekker, Inc. pp. 282–283, 341. ISBN   0-8247-0379-0.
  3. "Parameter of a distribution". www.statlect.com. Retrieved 2021-08-04.