Parameter

A parameter (from the Ancient Greek παρά, para: "beside", "subsidiary"; and μέτρον, metron: "measure"), generally, is any characteristic that can help in defining or classifying a particular system (meaning an event, project, object, situation, etc.). That is, a parameter is an element of a system that is useful, or critical, when identifying the system, or when evaluating its performance, status, condition, etc.

Parameter has more specific meanings within various disciplines, including mathematics, computer programming, engineering, statistics, logic, linguistics, and electronic musical composition.

In addition to its technical uses, there are also extended uses, especially in non-scientific contexts, where it is used to mean defining characteristics or boundaries, as in the phrases 'test parameters' or 'game play parameters'.[1]

Modelization

When a system is modeled by equations, the values that describe the system are called parameters. For example, in mechanics, the masses, the dimensions and shapes (for solid bodies), the densities and the viscosities (for fluids), appear as parameters in the equations modeling movements. There are often several choices for the parameters, and choosing a convenient set of parameters is called parametrization.

For example, if one were considering the movement of an object on the surface of a sphere much larger than the object (e.g. the Earth), there are two commonly used parametrizations of its position: angular coordinates (like latitude/longitude), which neatly describe large movements along circles on the sphere, and directional distance from a known point (e.g. "10 km NNW of Toronto" or equivalently "8 km due North, and then 6 km due West, from Toronto"), which are often simpler for movement confined to a (relatively) small area, like within a particular country or region. Such parametrizations are also relevant to the modelization of geographic areas (i.e. map drawing).
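
As an illustration, here is a minimal Python sketch (the function name and values are hypothetical, and it uses a flat-surface approximation that is only reasonable for small areas) converting the offset parametrization above into the equivalent distance-and-bearing parametrization:

    import math

    def offsets_to_range_bearing(north_km, west_km):
        # Convert a (north, west) offset parametrization of a position into
        # an equivalent (distance, bearing) parametrization of the same point.
        distance = math.hypot(north_km, west_km)
        # Bearing is measured clockwise from due North; a westward offset
        # corresponds to a counter-clockwise (negative) angle.
        bearing_deg = -math.degrees(math.atan2(west_km, north_km)) % 360
        return distance, bearing_deg

    # The example from the text: 8 km due North, then 6 km due West.
    print(offsets_to_range_bearing(8, 6))   # (10.0, 323.13...): 10 km toward the north-west

Both outputs describe the same point; only the choice of parameters differs.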

Mathematical functions

Mathematical functions have one or more arguments that are designated in the definition by variables. A function definition can also contain parameters, but unlike variables, parameters are not listed among the arguments that the function takes. When parameters are present, the definition actually defines a whole family of functions, one for every valid set of values of the parameters. For instance, one could define a general quadratic function by declaring

f(x) = ax² + bx + c;

Here, the variable x designates the function's argument, but a, b, and c are parameters that determine which particular quadratic function is being considered. A parameter could be incorporated into the function name to indicate its dependence on the parameter. For instance, one may define the base-b logarithm by the formula

log_b(x) = log(x) / log(b),

where b is a parameter that indicates which logarithmic function is being used. It is not an argument of the function, and will, for instance, be a constant when considering the derivative d/dx log_b(x) = 1/(x ln b).
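
In a programming language, this dependence can be made explicit by baking the parameter into the function at creation time. A minimal Python sketch (the helper name make_log is illustrative):

    import math

    def make_log(b):
        # b is a parameter, fixed when the function is created;
        # x is the argument of the resulting function.
        def log_b(x):
            return math.log(x) / math.log(b)
        return log_b

    log2 = make_log(2)    # one member of the family of logarithm functions
    log10 = make_log(10)  # another member
    print(log2(8), log10(1000))   # ≈ 3.0 and ≈ 3.0, up to floating-point rounding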

In some informal situations it is a matter of convention (or historical accident) whether some or all of the symbols in a function definition are called parameters. However, changing the status of symbols between parameter and variable changes the function as a mathematical object. For instance, the notation for the falling factorial power

(n)_k = n(n − 1)(n − 2) ⋯ (n − k + 1),

defines a polynomial function of n (when k is considered a parameter), but is not a polynomial function of k (when n is considered a parameter). Indeed, in the latter case, it is only defined for non-negative integer arguments. More formal presentations of such situations typically start out with a function of several variables (including all those that might sometimes be called "parameters"), such as

f(n, k) = (n)_k,

as the most fundamental object being considered, then defining functions with fewer variables from the main one by means of currying.
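
A short Python sketch of this currying step, using the falling factorial above (functools.partial fixes one variable, leaving a function of the remaining one):

    from functools import partial

    def falling_factorial(n, k):
        # f(n, k) = n(n − 1)...(n − k + 1), a function of two variables.
        result = 1
        for i in range(k):
            result *= n - i
        return result

    f_k3 = partial(falling_factorial, k=3)  # k fixed: a polynomial function of n
    f_n5 = partial(falling_factorial, 5)    # n fixed: a function of k alone
    print(f_k3(6))   # 6·5·4 = 120
    print(f_n5(2))   # 5·4 = 20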

Sometimes it is useful to consider all functions with certain parameters as a parametric family, i.e. as an indexed family of functions. Examples from probability theory are given further below.

Examples

W.M. Woods ... a mathematician ... writes ... "... a variable is one of the many things a parameter is not." ... The dependent variable, the speed of the car, depends on the independent variable, the position of the gas pedal.

[Kilpatrick quoting Woods] "Now ... the engineers ... change the lever arms of the linkage ... the speed of the car ... will still depend on the pedal position ... but in a ... different manner. You have changed a parameter"

Mathematical models

In the context of a mathematical model, such as a probability distribution, the distinction between variables and parameters was described by Bard as follows:

We refer to the relations which supposedly describe a certain physical situation, as a model. Typically, a model consists of one or more equations. The quantities appearing in the equations we classify into variables and parameters. The distinction between these is not always clear cut, and it frequently depends on the context in which the variables appear. Usually a model is designed to explain the relationships that exist among quantities which can be measured independently in an experiment; these are the variables of the model. To formulate these relationships, however, one frequently introduces "constants" which stand for inherent properties of nature (or of the materials and equipment used in a given experiment). These are the parameters.[2]

Analytic geometry

In analytic geometry, curves are often given as the image of some function. The argument of the function is invariably called "the parameter". A circle of radius 1 centered at the origin can be specified in more than one form:

implicit form, the curve is the locus of all points (x, y) that satisfy x² + y² = 1;

parametric form, the curve is the image of the function t ↦ (cos t, sin t), where t is the parameter.

Hence these equations, which might be called functions elsewhere, are in analytic geometry characterized as parametric equations, and the independent variables are considered as parameters.
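
A brief Python sketch of the parametric form: the parameter t is the function's argument, and each value of t yields one point of the curve.

    import math

    def circle_point(t):
        # Parametric form of the unit circle: t is the parameter,
        # (x, y) = (cos t, sin t) is the corresponding point on the curve.
        return math.cos(t), math.sin(t)

    # Every sampled point also satisfies the implicit form x² + y² = 1.
    for t in (0.0, math.pi / 2, math.pi):
        x, y = circle_point(t)
        assert abs(x**2 + y**2 - 1) < 1e-12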

Mathematical analysis

In mathematical analysis, integrals dependent on a parameter are often considered. These are of the form

F(t) = ∫ f(x; t) dx,  where x ranges over a fixed interval [x₀, x₁].

In this formula, t is the argument of the function F, and, on the right-hand side, the parameter on which the integral depends. When evaluating the integral, t is held constant, and so it is considered to be a parameter. If we are interested in the value of F for different values of t, we then consider t to be a variable. The quantity x is a dummy variable or variable of integration (confusingly, also sometimes called a parameter of integration).
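
A numerical Python sketch of this double role of t, with an illustrative integrand f(x; t) = e^(−tx) on [0, 1] (the midpoint rule and the choice of integrand are assumptions made for the example):

    import math

    def F(t, x0=0.0, x1=1.0, n=10_000):
        # Approximate F(t) = ∫ f(x; t) dx over [x0, x1] by the midpoint rule.
        # While the integral is evaluated, t is held constant (a parameter).
        f = lambda x: math.exp(-t * x)
        h = (x1 - x0) / n
        return h * sum(f(x0 + (i + 0.5) * h) for i in range(n))

    # Treating t as a variable: tabulate F at several values.
    for t in (0.5, 1.0, 2.0):
        print(t, F(t))   # exact value for this integrand is (1 − e^(−t)) / t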

Statistics and econometrics

In statistics and econometrics, the probability framework above still holds, but attention shifts to estimating the parameters of a distribution based on observed data, or testing hypotheses about them. In frequentist estimation, parameters are considered "fixed but unknown", whereas in Bayesian estimation they are treated as random variables, and their uncertainty is described as a distribution.

In estimation theory of statistics, "statistic" or estimator refers to samples, whereas "parameter" or estimand refers to the populations from which the samples are taken. A statistic is a numerical characteristic of a sample that can be used as an estimate of the corresponding parameter, the numerical characteristic of the population from which the sample was drawn.

For example, the sample mean (estimator), denoted x̄, can be used as an estimate of the mean parameter (estimand), denoted μ, of the population from which the sample was drawn. Similarly, the sample variance (estimator), denoted S², can be used to estimate the variance parameter (estimand), denoted σ², of the population from which the sample was drawn. (Note that the sample standard deviation (S) is not an unbiased estimate of the population standard deviation (σ): see Unbiased estimation of standard deviation.)
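
A small Python sketch with hypothetical data, using the standard library's statistics module (statistics.variance uses the n − 1 denominator, so S² is unbiased for σ²):

    import statistics

    sample = [4.8, 5.1, 5.0, 4.7, 5.4, 5.0]   # hypothetical observations

    x_bar = statistics.mean(sample)    # sample mean: estimator of μ
    s2 = statistics.variance(sample)   # sample variance S²: estimator of σ²
    print(x_bar, s2)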

It is possible to make statistical inferences without assuming a particular parametric family of probability distributions. In that case, one speaks of non-parametric statistics as opposed to the parametric statistics just described. For example, a test based on Spearman's rank correlation coefficient would be called non-parametric since the statistic is computed from the rank-order of the data, disregarding their actual values (and thus regardless of the distribution they were sampled from), whereas tests based on the Pearson product-moment correlation coefficient are parametric since the statistic is computed directly from the data values and thus estimates the parameter known as the population correlation.
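
A sketch of the contrast, assuming SciPy is available (the data values are hypothetical):

    from scipy import stats

    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [1.2, 1.9, 3.1, 3.9, 10.0]   # monotone in x, with one extreme value

    r, _ = stats.pearsonr(x, y)      # parametric: computed from the values themselves
    rho, _ = stats.spearmanr(x, y)   # non-parametric: computed from the rank order only
    print(r, rho)                    # rho is exactly 1.0; r is pulled around by the extreme value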

Probability theory

[Figure: These traces all represent Poisson distributions, but with different values for the parameter λ.]

In probability theory, one may describe the distribution of a random variable as belonging to a family of probability distributions, distinguished from each other by the values of a finite number of parameters. For example, one talks about "a Poisson distribution with mean value λ". The function defining the distribution (the probability mass function) is:

f(k; λ) = λᵏ e^(−λ) / k!.

This example nicely illustrates the distinction between constants, parameters, and variables. e is Euler's number, a fundamental mathematical constant. The parameter λ is the mean number of observations of some phenomenon in question, a property characteristic of the system. k is a variable, in this case the number of occurrences of the phenomenon actually observed from a particular sample. If we want to know the probability of observing k₁ occurrences, we plug it into the function to get f(k₁; λ). Without altering the system, we can take multiple samples, which will have a range of values of k, but the system is always characterized by the same λ.
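
The three roles are visible directly in a small Python sketch of the probability mass function:

    import math

    E = math.e   # constant: the same for every distribution and every sample

    def poisson_pmf(k, lam):
        # f(k; λ): k is the variable, λ is the parameter that selects
        # which member of the Poisson family is being used.
        return lam**k * E**(-lam) / math.factorial(k)

    print(poisson_pmf(3, 5.0))   # probability of observing k₁ = 3 events when λ = 5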

For instance, suppose we have a radioactive sample that emits, on average, five particles every ten minutes. We take measurements of how many particles the sample emits over ten-minute periods. The measurements exhibit different values of k, and if the sample behaves according to Poisson statistics, then each value of k will come up in a proportion given by the probability mass function above. From measurement to measurement, however, λ remains constant at 5. If we do not alter the system, then the parameter λ is unchanged from measurement to measurement; if, on the other hand, we modulate the system by replacing the sample with a more radioactive one, then the parameter λ would increase.
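
A simulation sketch of this experiment in Python (the sampler uses Knuth's classical multiplication method, since the standard library has no built-in Poisson generator):

    import math
    import random
    from collections import Counter

    def sample_poisson(lam, rng=random):
        # Knuth's method: multiply uniforms until the product drops below e^(−λ).
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    rng = random.Random(42)
    counts = Counter(sample_poisson(5.0, rng) for _ in range(10_000))
    # The measurements show a range of values of k, but λ = 5 is fixed throughout;
    # the relative frequency of each k approaches the pmf value above.
    for k in range(4, 7):
        print(k, counts[k] / 10_000)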

Another common distribution is the normal distribution, which has as parameters the mean μ and the variance σ².

In the above examples, the distributions of the random variables are completely specified by the type of distribution, i.e. Poisson or normal, and the parameter values, i.e. mean and variance. In such a case, we have a parameterized distribution.

It is possible to use the sequence of moments (mean, mean square, ...) or cumulants (mean, variance, ...) as parameters for a probability distribution: see Statistical parameter.

Computer programming

In computer programming, two notions of parameter are commonly used, and are referred to as parameters and arguments—or more formally as a formal parameter and an actual parameter.

For example, in the definition of a function such as

y = f(x) = x + 2,

x is the formal parameter (the parameter) of the defined function.

When the function is evaluated for a given value, as in

f(3): or, y = f(3) = 3 + 2 = 5,

3 is the actual parameter (the argument) for evaluation by the defined function; it is a given value (actual value) that is substituted for the formal parameter of the defined function. (In casual usage the terms parameter and argument might inadvertently be interchanged, and thereby used incorrectly.)
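
The same distinction in Python, mirroring the definition above:

    def f(x):        # x is the formal parameter of the defined function
        return x + 2

    y = f(3)         # 3 is the actual parameter (the argument)
    print(y)         # 5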

These concepts are discussed in a more precise way in functional programming and its foundational disciplines, lambda calculus and combinatory logic. Terminology varies between languages; some computer languages such as C define parameter and argument as given here, while Eiffel uses an alternative convention.

Engineering

In engineering (especially involving data acquisition) the term parameter sometimes loosely refers to an individual measured item. This usage is not consistent, as sometimes the term channel refers to an individual measured item, with parameter referring to the setup information about that channel.

"Speaking generally, properties are those physical quantities which directly describe the physical attributes of the system; parameters are those combinations of the properties which suffice to determine the response of the system. Properties can have all sorts of dimensions, depending upon the system being considered; parameters are dimensionless, or have the dimension of time or its reciprocal." [3]

The term can also be used in engineering contexts, however, in the same way as it is typically used in the physical sciences.

Environmental science

In environmental science and particularly in chemistry and microbiology, a parameter is used to describe a discrete chemical or microbiological entity that can be assigned a value: commonly a concentration, but it may also be a logical entity (present or absent), a statistical result such as a 95th percentile value, or in some cases a subjective value.

Linguistics

Within linguistics, the word "parameter" is almost exclusively used to denote a binary switch in a Universal Grammar within a Principles and Parameters framework.

Logic

In logic, the parameters passed to (or operated on by) an open predicate are called parameters by some authors (e.g., Prawitz, "Natural Deduction"; Paulson, "Designing a theorem prover"). Parameters locally defined within the predicate are called variables. This extra distinction pays off when defining substitution (without this distinction, special provision must be made to avoid variable capture). Others (maybe most) just call parameters passed to (or operated on by) an open predicate variables, and when defining substitution have to distinguish between free variables and bound variables.
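
A minimal Python sketch of substitution on an ad-hoc term representation (variables are strings, ("lam", v, body) binds v, and ("app", s, t) is application; renaming of bound variables to avoid capture is omitted for brevity):

    def substitute(term, var, value):
        # Replace free occurrences of var in term by value.
        if isinstance(term, str):                 # a variable
            return value if term == var else term
        if term[0] == "app":
            _, s, t = term
            return ("app", substitute(s, var, value), substitute(t, var, value))
        _, v, body = term                         # a binder: ("lam", v, body)
        if v == var:                              # var is bound here, so its
            return term                           # occurrences below are not free
        return ("lam", v, substitute(body, var, value))

    # The free "x" is replaced; the bound "x" under the binder is untouched.
    print(substitute(("app", "x", ("lam", "x", "x")), "x", "y"))
    # ('app', 'y', ('lam', 'x', 'x'))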

Music

In music theory, a parameter denotes an element which may be manipulated (composed), separately from the other elements. The term is used particularly for pitch, loudness, duration, and timbre, though theorists or composers have sometimes considered other musical aspects as parameters. The term is particularly used in serial music, where each parameter may follow some specified series. Paul Lansky and George Perle criticized the extension of the word "parameter" to this sense, since it is not closely related to its mathematical sense,[4] but it remains common. The term is also common in music production, as the functions of audio processing units (such as the attack, release, ratio, threshold, and other variables on a compressor) are defined by parameters specific to the type of unit (compressor, equalizer, delay, etc.).
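
As a sketch of the production usage, a hypothetical parameter set for a compressor might be modelled as plain data (the names and default values below are illustrative only):

    from dataclasses import dataclass

    @dataclass
    class CompressorParams:
        # Hypothetical controls mirroring those named in the text.
        threshold_db: float = -18.0   # level above which gain reduction starts
        ratio: float = 4.0            # input/output slope above the threshold
        attack_ms: float = 10.0       # how quickly reduction is applied
        release_ms: float = 120.0     # how quickly reduction is released

    preset = CompressorParams(ratio=8.0)   # vary one parameter, keep the rest
    print(preset)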

References

  1. "Home : Oxford English Dictionary". www.oed.com.
  2. Bard, Yonathan (1974). Nonlinear Parameter Estimation. New York: Academic Press. p. 11. ISBN 0-12-078250-2.
  3. Trimmer, John D. (1950). Response of Physical Systems. New York: Wiley. p. 13.
  4. Lansky, Paul & Perle, George (2001). "Parameter". In Root, Deane L. (ed.). The New Grove Dictionary of Music and Musicians. Oxford University Press.