# Linear discriminant analysis

Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.

LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. [1] [2] However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. the class label). [3] Logistic regression and probit regression are more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method.

LDA is also closely related to principal component analysis (PCA) and factor analysis in that all three look for linear combinations of variables which best explain the data. [4] LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made.

LDA works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis. [5] [6]

Discriminant analysis is used when groups are known a priori (unlike in cluster analysis). Each case must have a score on one or more quantitative predictor measures, and a score on a group measure. [7] In simple terms, discriminant function analysis is classification: the act of distributing things into groups, classes or categories of the same type.

## History

The original dichotomous discriminant analysis was developed by Sir Ronald Fisher in 1936. [8] It is different from an ANOVA or MANOVA, which is used to predict one (ANOVA) or multiple (MANOVA) continuous dependent variables by one or more independent categorical variables. Discriminant function analysis is useful in determining whether a set of variables is effective in predicting category membership. [9]

## LDA for two classes

Consider a set of observations ${\displaystyle {\vec {x}}}$ (also called features, attributes, variables or measurements) for each sample of an object or event with known class ${\displaystyle y}$. This set of samples is called the training set. The classification problem is then to find a good predictor for the class ${\displaystyle y}$ of any sample of the same distribution (not necessarily from the training set) given only an observation ${\displaystyle {\vec {x}}}$. [10] :338

LDA approaches the problem by assuming that the conditional probability density functions ${\displaystyle p({\vec {x}}|y=0)}$ and ${\displaystyle p({\vec {x}}|y=1)}$ are both normally distributed with mean and covariance parameters ${\displaystyle \left({\vec {\mu }}_{0},\Sigma _{0}\right)}$ and ${\displaystyle \left({\vec {\mu }}_{1},\Sigma _{1}\right)}$, respectively. Under this assumption, the Bayes-optimal solution is to predict points as being from the second class if the log of the likelihood ratio exceeds some threshold T, so that:

${\displaystyle ({\vec {x}}-{\vec {\mu }}_{0})^{T}\Sigma _{0}^{-1}({\vec {x}}-{\vec {\mu }}_{0})+\ln |\Sigma _{0}|-({\vec {x}}-{\vec {\mu }}_{1})^{T}\Sigma _{1}^{-1}({\vec {x}}-{\vec {\mu }}_{1})-\ln |\Sigma _{1}|\ >\ T}$

Without any further assumptions, the resulting classifier is referred to as quadratic discriminant analysis (QDA).
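
As an illustration, the following minimal sketch (Python/NumPy; the class parameters and threshold are invented for the example, not taken from the article) evaluates this quadratic criterion for a single point:

```python
import numpy as np

# Illustrative class parameters (assumptions for this sketch).
mu0, Sigma0 = np.array([0.0, 0.0]), np.array([[1.0, 0.2], [0.2, 1.0]])
mu1, Sigma1 = np.array([2.0, 1.0]), np.array([[1.5, -0.3], [-0.3, 0.8]])
T = 0.0  # decision threshold

def qda_statistic(x):
    """Left-hand side of the quadratic decision criterion above."""
    d0, d1 = x - mu0, x - mu1
    return (d0 @ np.linalg.inv(Sigma0) @ d0 + np.log(np.linalg.det(Sigma0))
            - d1 @ np.linalg.inv(Sigma1) @ d1 - np.log(np.linalg.det(Sigma1)))

x = np.array([1.5, 0.5])
print("class 1" if qda_statistic(x) > T else "class 0")
```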

LDA instead makes the additional simplifying assumption of homoscedasticity (i.e. that the class covariances are identical, so ${\displaystyle \Sigma _{0}=\Sigma _{1}=\Sigma }$) and that the covariance has full rank. In this case, several terms cancel:

${\displaystyle {\vec {x}}^{T}\Sigma _{0}^{-1}{\vec {x}}={\vec {x}}^{T}\Sigma _{1}^{-1}{\vec {x}}}$
${\displaystyle {\vec {x}}^{T}{\Sigma _{i}}^{-1}{\vec {\mu }}_{i}={{\vec {\mu }}_{i}}^{T}{\Sigma _{i}}^{-1}{\vec {x}}}$ because ${\displaystyle \Sigma _{i}}$ is Hermitian

and the above decision criterion becomes a threshold on the dot product

${\displaystyle {\vec {w}}\cdot {\vec {x}}>c}$

for some threshold constant c, where

${\displaystyle {\vec {w}}=\Sigma ^{-1}({\vec {\mu }}_{1}-{\vec {\mu }}_{0})}$
${\displaystyle c={\vec {w}}\cdot {\frac {1}{2}}({\vec {\mu }}_{1}+{\vec {\mu }}_{0})}$

This means that the criterion of an input ${\displaystyle {\vec {x}}}$ being in a class ${\displaystyle y}$ is purely a function of this linear combination of the known observations.

It is often useful to see this conclusion in geometrical terms: the criterion of an input ${\displaystyle {\vec {x}}}$ being in a class ${\displaystyle y}$ is purely a function of the projection of the multidimensional-space point ${\displaystyle {\vec {x}}}$ onto the vector ${\displaystyle {\vec {w}}}$ (thus, we only consider its direction). In other words, the observation belongs to ${\displaystyle y}$ if the corresponding ${\displaystyle {\vec {x}}}$ is located on a certain side of a hyperplane perpendicular to ${\displaystyle {\vec {w}}}$. The location of the plane is defined by the threshold c.
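
A minimal sketch of the resulting linear rule (Python/NumPy; the means and shared covariance are illustrative assumptions, not from the source):

```python
import numpy as np

# Illustrative shared-covariance parameters (assumptions for this sketch).
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
Sigma = np.array([[1.0, 0.2], [0.2, 1.0]])  # common covariance, full rank

w = np.linalg.solve(Sigma, mu1 - mu0)   # w = Sigma^{-1} (mu1 - mu0)
c = w @ (0.5 * (mu1 + mu0))             # threshold at the projected midpoint

x = np.array([1.5, 0.5])
print("class 1" if w @ x > c else "class 0")
```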

## Assumptions

The assumptions of discriminant analysis are the same as those for MANOVA. The analysis is quite sensitive to outliers and the size of the smallest group must be larger than the number of predictor variables. [7]

• Multivariate normality: Independent variables are normal for each level of the grouping variable. [9] [7]
• Homogeneity of variance/covariance (homoscedasticity): Variances among group variables are the same across levels of predictors. Can be tested with Box's M statistic. [9] It has been suggested, however, that linear discriminant analysis be used when covariances are equal, and that quadratic discriminant analysis may be used when covariances are not equal. [7]
• Multicollinearity: Predictive power can decrease with an increased correlation between predictor variables. [7]
• Independence: Participants are assumed to be randomly sampled, and a participant's score on one variable is assumed to be independent of scores on that variable for all other participants. [9] [7]

It has been suggested that discriminant analysis is relatively robust to slight violations of these assumptions, [11] and it has also been shown that discriminant analysis may still be reliable when using dichotomous variables (where multivariate normality is often violated). [12]

## Discriminant functions

Discriminant analysis works by creating one or more linear combinations of predictors, creating a new latent variable for each function. These functions are called discriminant functions. The number of functions possible is either ${\displaystyle N_{g}-1}$ where ${\displaystyle N_{g}}$ = number of groups, or ${\displaystyle p}$ (the number of predictors), whichever is smaller. The first function created maximizes the differences between groups on that function. The second function maximizes differences on that function, but also must not be correlated with the previous function. This continues with subsequent functions with the requirement that the new function not be correlated with any of the previous functions.
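
For example, the min(${\displaystyle N_{g}-1}$, ${\displaystyle p}$) bound can be seen with scikit-learn's `LinearDiscriminantAnalysis` (a sketch; the iris data set is a standard example, not drawn from this article):

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)          # 3 groups, 4 predictors
lda = LinearDiscriminantAnalysis().fit(X, y)
Z = lda.transform(X)                       # discriminant scores per function
print(Z.shape)  # (150, 2): min(3 - 1, 4) = 2 discriminant functions
```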

Given group ${\displaystyle j}$ with region ${\displaystyle \mathbb {R} _{j}}$ of the sample space, there is a discriminant rule such that if ${\displaystyle x\in \mathbb {R} _{j}}$, then ${\displaystyle x}$ is assigned to group ${\displaystyle j}$. Discriminant analysis then finds "good" regions ${\displaystyle \mathbb {R} _{j}}$ that minimize classification error, leading to a high percentage correctly classified in the classification table. [13]

Each function is given a discriminant score to determine how well it predicts group placement.

• Structure Correlation Coefficients: The correlation between each predictor and the discriminant score of each function. This is a zero-order correlation (i.e., not corrected for the other predictors). [14]
• Standardized Coefficients: Each predictor's weight in the linear combination that is the discriminant function. Like in a regression equation, these coefficients are partial (i.e., corrected for the other predictors). They indicate the unique contribution of each predictor in predicting group assignment.
• Functions at Group Centroids: Mean discriminant scores for each grouping variable are given for each function. The farther apart the means are, the less error there will be in classification.

## Discrimination rules

• Maximum likelihood: Assigns x to the group that maximizes population (group) density. [15]
• Bayes Discriminant Rule: Assigns x to the group that maximizes ${\displaystyle \pi _{i}f_{i}(x)}$, where πi represents the prior probability of that classification, and ${\displaystyle f_{i}(x)}$ represents the population density (see the sketch after this list). [15]
• Fisher's linear discriminant rule: Maximizes the ratio between ${\displaystyle SS_{\text{between}}}$ and ${\displaystyle SS_{\text{within}}}$, and finds a linear combination of the predictors to predict group. [15]
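
A minimal sketch of the Bayes discriminant rule (Python with SciPy; the group densities and priors are illustrative assumptions):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative group priors and normal densities (assumptions for this sketch).
groups = [
    {"prior": 0.7, "mean": [0.0, 0.0], "cov": [[1.0, 0.0], [0.0, 1.0]]},
    {"prior": 0.3, "mean": [2.0, 1.0], "cov": [[1.0, 0.0], [0.0, 1.0]]},
]

def bayes_assign(x):
    """Assign x to the group maximizing prior * density, i.e. pi_i * f_i(x)."""
    scores = [g["prior"] * multivariate_normal.pdf(x, g["mean"], g["cov"])
              for g in groups]
    return int(np.argmax(scores))

print(bayes_assign([1.5, 0.5]))
```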

## Eigenvalues

An eigenvalue in discriminant analysis is the characteristic root of each function. It is an indication of how well that function differentiates the groups: the larger the eigenvalue, the better the function differentiates. [7] This, however, should be interpreted with caution, as eigenvalues have no upper limit. [9] [7] The eigenvalue can be viewed as the ratio of ${\displaystyle SS_{\text{between}}}$ to ${\displaystyle SS_{\text{within}}}$, as in ANOVA, when the dependent variable is the discriminant function and the groups are the levels of the independent variable. [9] This means that the largest eigenvalue is associated with the first function, the second largest with the second, and so on.

## Effect size

Some suggest the use of eigenvalues as effect size measures; however, this is generally not supported. [9] Instead, the canonical correlation is the preferred measure of effect size. It is similar to the eigenvalue, but is the square root of the ratio of ${\displaystyle SS_{\text{between}}}$ to ${\displaystyle SS_{\text{total}}}$. It is the correlation between groups and the function. [9] Another popular measure of effect size is the percent of variance for each function. This is calculated by ${\displaystyle (\lambda _{x}/\sum \lambda _{i})\times 100}$, where ${\displaystyle \lambda _{x}}$ is the eigenvalue for the function and ${\displaystyle \sum \lambda _{i}}$ is the sum of all eigenvalues. This tells us how strong the prediction is for that particular function compared to the others. [9] Percent correctly classified can also be analyzed as an effect size. The kappa value can describe this while correcting for chance agreement. [9] Kappa normalizes across all categories rather than being biased by significantly well- or poorly-performing classes. [16]
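
As a quick worked example of the percent-of-variance formula (the eigenvalues below are invented for illustration):

```python
# Illustrative eigenvalues for three discriminant functions (assumed values).
eigenvalues = [4.0, 1.0, 0.5]
total = sum(eigenvalues)
percents = [100 * ev / total for ev in eigenvalues]
print(percents)  # ~[72.7, 18.2, 9.1]: the first function carries most variance
```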

## Canonical discriminant analysis for k classes

Canonical discriminant analysis (CDA) finds axes (k − 1 canonical coordinates, k being the number of classes) that best separate the categories. These linear functions are uncorrelated and define, in effect, an optimal (k − 1)-dimensional space through the n-dimensional cloud of data that best separates (the projections in that space of) the k groups. See "Multiclass LDA" below for details.

## Fisher's linear discriminant

The terms Fisher's linear discriminant and LDA are often used interchangeably, although Fisher's original article [1] actually describes a slightly different discriminant, which does not make some of the assumptions of LDA such as normally distributed classes or equal class covariances.

Suppose two classes of observations have means ${\displaystyle {\vec {\mu }}_{0},{\vec {\mu }}_{1}}$ and covariances ${\displaystyle \Sigma _{0},\Sigma _{1}}$. Then the linear combination of features ${\displaystyle {\vec {w}}\cdot {\vec {x}}}$ will have means ${\displaystyle {\vec {w}}\cdot {\vec {\mu }}_{i}}$ and variances ${\displaystyle {\vec {w}}^{T}\Sigma _{i}{\vec {w}}}$ for ${\displaystyle i=0,1}$. Fisher defined the separation between these two distributions to be the ratio of the variance between the classes to the variance within the classes:

${\displaystyle S={\frac {\sigma _{\text{between}}^{2}}{\sigma _{\text{within}}^{2}}}={\frac {({\vec {w}}\cdot {\vec {\mu }}_{1}-{\vec {w}}\cdot {\vec {\mu }}_{0})^{2}}{{\vec {w}}^{T}\Sigma _{1}{\vec {w}}+{\vec {w}}^{T}\Sigma _{0}{\vec {w}}}}={\frac {({\vec {w}}\cdot ({\vec {\mu }}_{1}-{\vec {\mu }}_{0}))^{2}}{{\vec {w}}^{T}(\Sigma _{0}+\Sigma _{1}){\vec {w}}}}}$

This measure is, in some sense, a measure of the signal-to-noise ratio for the class labelling. It can be shown that the maximum separation occurs when

${\displaystyle {\vec {w}}\propto (\Sigma _{0}+\Sigma _{1})^{-1}({\vec {\mu }}_{1}-{\vec {\mu }}_{0})}$

When the assumptions of LDA are satisfied, the above equation is equivalent to LDA.

Note that the vector ${\displaystyle {\vec {w}}}$ is the normal to the discriminant hyperplane. For example, in a two-dimensional problem, the line that best divides the two groups is perpendicular to ${\displaystyle {\vec {w}}}$.

Generally, the data points to be discriminated are projected onto ${\displaystyle {\vec {w}}}$; then the threshold that best separates the data is chosen from analysis of the one-dimensional distribution. There is no general rule for the threshold. However, if projections of points from both classes exhibit approximately the same distributions, a good choice would be the hyperplane between projections of the two means, ${\displaystyle {\vec {w}}\cdot {\vec {\mu }}_{0}}$ and ${\displaystyle {\vec {w}}\cdot {\vec {\mu }}_{1}}$. In this case the parameter c in threshold condition ${\displaystyle {\vec {w}}\cdot {\vec {x}}>c}$ can be found explicitly:

${\displaystyle c={\vec {w}}\cdot {\frac {1}{2}}({\vec {\mu }}_{0}+{\vec {\mu }}_{1})={\frac {1}{2}}{\vec {\mu }}_{1}^{T}\Sigma ^{-1}{\vec {\mu }}_{1}-{\frac {1}{2}}{\vec {\mu }}_{0}^{T}\Sigma ^{-1}{\vec {\mu }}_{0}}$, where the second equality uses the equal-covariance case ${\displaystyle \Sigma _{0}=\Sigma _{1}=\Sigma }$ with ${\displaystyle {\vec {w}}=\Sigma ^{-1}({\vec {\mu }}_{1}-{\vec {\mu }}_{0})}$.
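
A minimal sketch of Fisher's rule estimated from data (Python/NumPy; the synthetic two-class data are an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic two-class data with unequal covariances (illustrative assumption).
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=200)
X1 = rng.multivariate_normal([2, 1], [[1.2, -0.2], [-0.2, 0.8]], size=200)

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
S0 = np.cov(X0, rowvar=False)
S1 = np.cov(X1, rowvar=False)

# Fisher direction: w proportional to (Sigma0 + Sigma1)^{-1} (mu1 - mu0)
w = np.linalg.solve(S0 + S1, mu1 - mu0)
c = w @ (0.5 * (mu0 + mu1))  # threshold midway between the projected means

accuracy = (np.sum(X1 @ w > c) + np.sum(X0 @ w <= c)) / 400
print(f"training accuracy: {accuracy:.2f}")
```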

Otsu's method is related to Fisher's linear discriminant, and was created to binarize the histogram of pixels in a grayscale image by optimally picking the black/white threshold that minimizes intra-class variance and maximizes inter-class variance between the grayscale values assigned to the black and white pixel classes.

## Multiclass LDA

In the case where there are more than two classes, the analysis used in the derivation of the Fisher discriminant can be extended to find a subspace which appears to contain all of the class variability. [17] This generalization is due to C. R. Rao. [18] Suppose that each of C classes has a mean ${\displaystyle \mu _{i}}$ and the same covariance ${\displaystyle \Sigma }$. Then the between-class scatter may be defined by the sample covariance of the class means

${\displaystyle \Sigma _{b}={\frac {1}{C}}\sum _{i=1}^{C}(\mu _{i}-\mu )(\mu _{i}-\mu )^{T}}$

where ${\displaystyle \mu }$ is the mean of the class means. The class separation in a direction ${\displaystyle {\vec {w}}}$ in this case will be given by

${\displaystyle S={\frac {{\vec {w}}^{T}\Sigma _{b}{\vec {w}}}{{\vec {w}}^{T}\Sigma {\vec {w}}}}}$

This means that when ${\displaystyle {\vec {w}}}$ is an eigenvector of ${\displaystyle \Sigma ^{-1}\Sigma _{b}}$ the separation will be equal to the corresponding eigenvalue.

If ${\displaystyle \Sigma ^{-1}\Sigma _{b}}$ is diagonalizable, the variability between features will be contained in the subspace spanned by the eigenvectors corresponding to the C − 1 largest eigenvalues (since ${\displaystyle \Sigma _{b}}$ is of rank C − 1 at most). These eigenvectors are primarily used in feature reduction, as in PCA. The eigenvectors corresponding to the smaller eigenvalues will tend to be very sensitive to the exact choice of training data, and it is often necessary to use regularisation as described in the next section.
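
A minimal sketch of this construction (Python/NumPy; the three-class synthetic data and the equal-weight pooling of the class covariances are illustrative assumptions, since conventions for ${\displaystyle \Sigma _{b}}$ vary):

```python
import numpy as np

rng = np.random.default_rng(1)
# Three synthetic classes with a shared covariance (illustrative assumption).
means = [np.array([0.0, 0.0, 0.0]),
         np.array([3.0, 0.0, 1.0]),
         np.array([0.0, 3.0, -1.0])]
Xs = [rng.multivariate_normal(m, np.eye(3), size=100) for m in means]

class_means = [X.mean(axis=0) for X in Xs]
mu = np.mean(class_means, axis=0)                  # mean of the class means
Sigma_b = np.mean([np.outer(m - mu, m - mu) for m in class_means], axis=0)
Sigma = np.mean([np.cov(X, rowvar=False) for X in Xs], axis=0)  # pooled estimate

# Directions maximizing S = w^T Sigma_b w / w^T Sigma w are the
# eigenvectors of Sigma^{-1} Sigma_b with the largest eigenvalues.
evals, evecs = np.linalg.eig(np.linalg.solve(Sigma, Sigma_b))
order = np.argsort(evals.real)[::-1]
W = evecs.real[:, order[:2]]      # keep the C - 1 = 2 discriminant axes
print(evals.real[order])          # smallest eigenvalue ~ 0 (rank C - 1)
```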

If classification is required, instead of dimension reduction, there are a number of alternative techniques available. For instance, the classes may be partitioned, and a standard Fisher discriminant or LDA used to classify each partition. A common example of this is "one against the rest" where the points from one class are put in one group, and everything else in the other, and then LDA applied. This will result in C classifiers, whose results are combined. Another common method is pairwise classification, where a new classifier is created for each pair of classes (giving C(C − 1)/2 classifiers in total), with the individual classifiers combined to produce a final classification.

## Incremental LDA

The typical implementation of the LDA technique requires that all the samples are available in advance. However, there are situations where the entire data set is not available and the input data are observed as a stream. In this case, it is desirable for the LDA feature extraction to have the ability to update the computed LDA features by observing the new samples without running the algorithm on the whole data set. For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. [19] Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features. [20] In other work, Demir and Ozmehmet proposed online local learning algorithms for updating LDA features incrementally using error-correcting and the Hebbian learning rules. [21] Later, Aliyari et al. derived fast incremental algorithms to update the LDA features by observing the new samples. [19]

## Practical use

In practice, the class means and covariances are not known. They can, however, be estimated from the training set. Either the maximum likelihood estimate or the maximum a posteriori estimate may be used in place of the exact value in the above equations. Although the estimates of the covariance may be considered optimal in some sense, this does not mean that the resulting discriminant obtained by substituting these values is optimal in any sense, even if the assumption of normally distributed classes is correct.

Another complication in applying LDA and Fisher's discriminant to real data occurs when the number of measurements of each sample (i.e., the dimensionality of each data vector) exceeds the number of samples in each class. [4] In this case, the covariance estimates do not have full rank, and so cannot be inverted. There are a number of ways to deal with this. One is to use a pseudoinverse instead of the usual matrix inverse in the above formulae. However, better numeric stability may be achieved by first projecting the problem onto the subspace spanned by ${\displaystyle \Sigma _{b}}$. [22] Another strategy to deal with small sample size is to use a shrinkage estimator of the covariance matrix, which can be expressed mathematically as

${\displaystyle \Sigma _{\lambda }=(1-\lambda )\Sigma +\lambda I\,}$

where ${\displaystyle I}$ is the identity matrix, and ${\displaystyle \lambda }$ is the shrinkage intensity or regularisation parameter. This leads to the framework of regularized discriminant analysis [23] or shrinkage discriminant analysis. [24]
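
A hedged sketch of this shrinkage estimator (Python; `shrink_covariance` is a hypothetical helper implementing the equation above, and scikit-learn's `LinearDiscriminantAnalysis` offers a closely related automatic shrinkage option):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def shrink_covariance(S, lam):
    """Regularized estimate per the equation above: (1 - lam) * S + lam * I."""
    return (1.0 - lam) * S + lam * np.eye(S.shape[0])

# scikit-learn's related option: Ledoit-Wolf shrinkage toward a scaled
# identity, with the intensity chosen automatically ('lsqr' or 'eigen' solver).
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
```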

Also, in many practical cases linear discriminants are not suitable. LDA and Fisher's discriminant can be extended for use in non-linear classification via the kernel trick. Here, the original observations are effectively mapped into a higher dimensional non-linear space. Linear classification in this non-linear space is then equivalent to non-linear classification in the original space. The most commonly used example of this is the kernel Fisher discriminant.
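
A compact sketch of a two-class kernel Fisher discriminant in its commonly used formulation (Python/NumPy; the RBF kernel, regularization constant, and synthetic data are illustrative assumptions):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
# Two synthetic classes (illustrative assumption).
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(2.5, 0.7, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

K = rbf(X, X)
idx = [np.flatnonzero(y == i) for i in (0, 1)]
M = [K[:, ix].mean(axis=1) for ix in idx]          # class means in kernel space
# Within-class scatter in feature space: N = sum_i K_i (I - 1/n_i) K_i^T
N = sum(K[:, ix] @ (np.eye(len(ix)) - 1.0 / len(ix)) @ K[:, ix].T for ix in idx)

eps = 1e-3                                         # ridge regularization (assumed)
alpha = np.linalg.solve(N + eps * np.eye(len(X)), M[1] - M[0])

scores = K @ alpha                                 # 1-D non-linear projections
c = 0.5 * (scores[idx[0]].mean() + scores[idx[1]].mean())  # midpoint threshold
print(np.mean((scores > c) == (y == 1)))           # training accuracy
```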

LDA can be generalized to multiple discriminant analysis, where c becomes a categorical variable with N possible states, instead of only two. Analogously, if the class-conditional densities ${\displaystyle p({\vec {x}}\mid c=i)}$ are normal with shared covariances, the sufficient statistic for ${\displaystyle P(c\mid {\vec {x}})}$ is the set of N projections of ${\displaystyle {\vec {x}}}$ onto the subspace spanned by the N means, affine-projected by the inverse covariance matrix. These projections can be found by solving a generalized eigenvalue problem, where the numerator is the covariance matrix formed by treating the means as the samples, and the denominator is the shared covariance matrix. See "Multiclass LDA" above for details.

## Applications

In addition to the examples given below, LDA is applied in positioning and product management.

### Bankruptcy prediction

In bankruptcy prediction based on accounting ratios and other financial variables, linear discriminant analysis was the first statistical method applied to systematically explain which firms entered bankruptcy vs. survived. Despite limitations including known nonconformance of accounting ratios to the normal distribution assumptions of LDA, Edward Altman's 1968 model is still a leading model in practical applications.

### Face recognition

In computerised face recognition, each face is represented by a large number of pixel values. Linear discriminant analysis is primarily used here to reduce the number of features to a more manageable number before classification. Each of the new dimensions is a linear combination of pixel values, which form a template. The linear combinations obtained using Fisher's linear discriminant are called Fisher faces, while those obtained using the related principal component analysis are called eigenfaces.

### Marketing

In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps:

1. Formulate the problem and gather data—Identify the salient attributes consumers use to evaluate products in this category—Use quantitative marketing research techniques (such as surveys) to collect data from a sample of potential customers concerning their ratings of all the product attributes. The data collection stage is usually done by marketing research professionals. Survey questions ask the respondent to rate a product from one to five (or 1 to 7, or 1 to 10) on a range of attributes chosen by the researcher. Anywhere from five to twenty attributes are chosen. They could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size. The attributes chosen will vary depending on the product being studied. The same question is asked about all the products in the study. The data for multiple products is codified and input into a statistical program such as R, SPSS or SAS. (This step is the same as in Factor analysis).
2. Estimate the Discriminant Function Coefficients and determine the statistical significance and validity—Choose the appropriate discriminant analysis method. The direct method involves estimating the discriminant function so that all the predictors are assessed simultaneously. The stepwise method enters the predictors sequentially. The two-group method should be used when the dependent variable has two categories or states. The multiple discriminant method is used when the dependent variable has three or more categorical states. Use Wilks's Lambda to test for significance in SPSS or F stat in SAS. The most common method used to test validity is to split the sample into an estimation or analysis sample, and a validation or holdout sample. The estimation sample is used in constructing the discriminant function. The validation sample is used to construct a classification matrix which contains the number of correctly classified and incorrectly classified cases. The percentage of correctly classified cases is called the hit ratio.
3. Plot the results on a two-dimensional map, define the dimensions, and interpret the results. The statistical program (or a related module) will map the results. The map will plot each product (usually in two-dimensional space). The distance between products indicates how different they are. The dimensions must be labelled by the researcher. This requires subjective judgement and is often very challenging. See perceptual mapping.

### Biomedical studies

The main application of discriminant analysis in medicine is the assessment of severity state of a patient and prognosis of disease outcome. For example, during retrospective analysis, patients are divided into groups according to severity of disease – mild, moderate and severe form. Then results of clinical and laboratory analyses are studied in order to reveal variables which are statistically different in studied groups. Using these variables, discriminant functions are built which help to objectively classify disease in a future patient into mild, moderate or severe form.

In biology, similar principles are used in order to classify and define groups of different biological objects, for example, to define phage types of Salmonella enteritidis based on Fourier transform infrared spectra, [25] to detect animal source of Escherichia coli studying its virulence factors [26] etc.

### Earth science

This method can be used to separate alteration zones. For example, when different data from various zones are available, discriminant analysis can find the pattern within the data and classify the zones effectively. [27]

## Comparison to logistic regression

Discriminant function analysis is very similar to logistic regression, and both can be used to answer the same research questions. [9] Logistic regression does not have as many assumptions and restrictions as discriminant analysis. However, when discriminant analysis' assumptions are met, it is more powerful than logistic regression. [28] Unlike logistic regression, discriminant analysis can be used with small sample sizes. It has been shown that when sample sizes are equal and homogeneity of variance/covariance holds, discriminant analysis is more accurate. [7] Despite all these advantages, logistic regression has nonetheless become the common choice, since the assumptions of discriminant analysis are rarely met. [8] [7]

## Linear discriminant in high dimension

Geometric anomalies in high dimension lead to the well-known curse of dimensionality. Nevertheless, proper utilization of concentration of measure phenomena can make computation easier. [29] An important case of these blessing-of-dimensionality phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional then each point can be separated from the rest of the sample by a linear inequality, with high probability, even for exponentially large samples. [30] These linear inequalities can be selected in the standard (Fisher's) form of the linear discriminant for a rich family of probability distributions. [31] In particular, such theorems are proven for log-concave distributions including the multidimensional normal distribution (the proof is based on the concentration inequalities for log-concave measures [32] ) and for product measures on a multidimensional cube (this is proven using Talagrand's concentration inequality for product probability spaces). Data separability by classical linear discriminants simplifies the problem of error correction for artificial intelligence systems in high dimension. [33]


## References

1. Fisher, R. A. (1936). "The Use of Multiple Measurements in Taxonomic Problems" (PDF). Annals of Eugenics. 7 (2): 179–188. doi:10.1111/j.1469-1809.1936.tb02137.x.
2. McLachlan, G. J. (2004). Discriminant Analysis and Statistical Pattern Recognition. Wiley Interscience. ISBN 978-0-471-69115-0. MR 1190469.
3. Wetcher-Hendricks, Debra. Analyzing Quantitative Data: An Introduction for Social Researchers. p. 288.
4. Martinez, A. M.; Kak, A. C. (2001). "PCA versus LDA" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 23 (2): 228–233. doi:10.1109/34.908974.
5. Abdi, H. (2007). "Discriminant correspondence analysis". In: N. J. Salkind (Ed.): Encyclopedia of Measurement and Statistics. Thousand Oaks (CA): Sage. pp. 270–275.
6. Perriere, G.; Thioulouse, J. (2003). "Use of Correspondence Discriminant Analysis to predict the subcellular location of bacterial proteins". Computer Methods and Programs in Biomedicine. 70 (2): 99–105. doi:10.1016/s0169-2607(02)00011-1. PMID 12507786.
7. Bökeoğlu Çokluk, Ö.; Büyüköztürk, Ş. (2008). "Discriminant function analysis: Concept and application". Eğitim Araştırmaları Dergisi (33): 73–92.
8. Cohen et al. (2003). Applied Multiple Regression/Correlation Analysis for the Behavioural Sciences (3rd ed.). Taylor & Francis Group.
9. Green, S. B.; Salkind, N. J.; Akey, T. M. (2008). Using SPSS for Windows and Macintosh: Analyzing and Understanding Data. New Jersey: Prentice Hall.
10. Venables, W. N.; Ripley, B. D. (2002). Modern Applied Statistics with S (4th ed.). Springer Verlag. ISBN 978-0-387-95457-8.
11. Lachenbruch, P. A. (1975). Discriminant Analysis. NY: Hafner.
12. Klecka, William R. (1980). Discriminant Analysis. Quantitative Applications in the Social Sciences Series, No. 19. Thousand Oaks, CA: Sage Publications.
13. Hardle, W.; Simar, L. (2007). Applied Multivariate Statistical Analysis. Springer Berlin Heidelberg. pp. 289–303.
14. Garson, G. D. (2008). Discriminant function analysis. https://web.archive.org/web/20080312065328/http://www2.chass.ncsu.edu/garson/pA765/discrim.htm
15. Hardle, W.; Simar, L. (2007). Applied Multivariate Statistical Analysis. Springer Berlin Heidelberg. pp. 289–303.
16. Israel, Steven A. (June 2006). "Performance Metrics: How and When". Geocarto International. 21 (2): 23–32. doi:10.1080/10106040608542380. ISSN 1010-6049. S2CID 122376081.
17. Garson, G. D. (2008). Discriminant function analysis. Archived from the original on 2008-03-12. Retrieved 2008-03-04.
18. Rao, R. C. (1948). "The utilization of multiple measurements in problems of biological classification". Journal of the Royal Statistical Society, Series B. 10 (2): 159–203. JSTOR 2983775.
19. Aliyari Ghassabeh, Youness; Rudzicz, Frank; Moghaddam, Hamid Abrishami (2015). "Fast incremental LDA feature extraction". Pattern Recognition. 48 (6): 1999–2012. Bibcode:2015PatRe..48.1999A. doi:10.1016/j.patcog.2014.12.012.
20. Chatterjee, C.; Roychowdhury, V. P. (1997). "On self-organizing algorithms and networks for class-separability features". IEEE Transactions on Neural Networks. 8 (3): 663–678. doi:10.1109/72.572105. ISSN 1045-9227. PMID 18255669.
21. Demir, G. K.; Ozmehmet, K. (2005). "Online Local Learning Algorithms for Linear Discriminant Analysis". Pattern Recognition Letters. 26 (4): 421–431. Bibcode:2005PaReL..26..421D. doi:10.1016/j.patrec.2004.08.005. ISSN 0167-8655.
22. Yu, H.; Yang, J. (2001). "A direct LDA algorithm for high-dimensional data — with application to face recognition". Pattern Recognition. 34 (10): 2067–2069. Bibcode:2001PatRe..34.2067Y. doi:10.1016/s0031-3203(00)00162-x.
23. Friedman, J. H. (1989). "Regularized Discriminant Analysis" (PDF). Journal of the American Statistical Association. 84 (405): 165–175. doi:10.2307/2289860. JSTOR 2289860. MR 0999675.
24. Ahdesmäki, M.; Strimmer, K. (2010). "Feature selection in omics prediction problems using cat scores and false nondiscovery rate control". Annals of Applied Statistics. 4 (1): 503–519. doi:10.1214/09-aoas277. S2CID 2508935.
25. Preisner, O.; Guiomar, R.; Machado, J.; Menezes, J. C.; Lopes, J. A. (2010). "Application of Fourier transform infrared spectroscopy and chemometrics for differentiation of Salmonella enterica serovar Enteritidis phage types". Applied and Environmental Microbiology. 76 (11): 3538–3544. Bibcode:2010ApEnM..76.3538P. doi:10.1128/aem.01589-09. PMID 20363777.
26. David, D. E.; Lynne, A. M.; Han, J.; Foley, S. L. (2010). "Evaluation of virulence factor profiling in the characterization of veterinary Escherichia coli isolates". Applied and Environmental Microbiology. 76 (22): 7509–7513. Bibcode:2010ApEnM..76.7509D. doi:10.1128/aem.00726-10. PMID 20889790.
27. Tahmasebi, P.; Hezarkhani, A.; Mortazavi, M. (2010). "Application of discriminant analysis for alteration separation; Sungun copper deposit, East Azerbaijan, Iran" (PDF). Australian Journal of Basic and Applied Sciences. 6 (4): 564–576.
28. Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome. The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd ed.). Springer. p. 128.
29. Kainen, P. C. (1997). "Utilizing geometric anomalies of high dimension: When complexity makes computation easier". In: Kárný, M.; Warwick, K. (eds.), Computer Intensive Methods in Control and Signal Processing: The Curse of Dimensionality. Springer. pp. 282–294.
30. Donoho, D.; Tanner, J. (2009). "Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing". Phil. Trans. R. Soc. A. 367: 4273–4293.
31. Gorban, Alexander N.; Golubkov, Alexander; Grechuck, Bogdan; Mirkes, Evgeny M.; Tyukin, Ivan Y. (2018). "Correction of AI systems by linear discriminants: Probabilistic foundations". Information Sciences. 466: 303–322. doi:10.1016/j.ins.2018.07.040. S2CID 52876539.
32. Guédon, O.; Milman, E. (2011). "Interpolating thin-shell and sharp large-deviation estimates for isotropic log-concave measures". Geom. Funct. Anal. 21 (5): 1043–1068.
33. Gorban, Alexander N.; Makarov, Valeri A.; Tyukin, Ivan Y. (July 2019). "The unreasonable effectiveness of small neural ensembles in high-dimensional brain". Physics of Life Reviews. 29: 55–88. Bibcode:2019PhLRv..29...55G. PMID 30366739.