Factor analysis

Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. For example, it is possible that variations in six observed variables mainly reflect the variations in two unobserved (underlying) variables. Factor analysis searches for such joint variations in response to unobserved latent variables. The observed variables are modelled as linear combinations of the potential factors, plus "error" terms. Factor analysis aims to find independent latent variables.

Factor analysis is used in machine learning and is closely related to data mining. The theory behind factor analytic methods is that the information gained about the interdependencies between observed variables can be used later to reduce the set of variables in a dataset. Factor analysis is commonly used in biology, psychometrics, personality theories, marketing, product management, operations research, and finance. It may help to deal with data sets where there are large numbers of observed variables that are thought to reflect a smaller number of underlying/latent variables. It is one of the most commonly used inter-dependency techniques and is used when the relevant set of variables shows a systematic inter-dependence and the objective is to find out the latent factors that create a commonality.

Factor analysis is related to principal component analysis (PCA), but the two are not identical. [1] There has been significant controversy in the field over differences between the two techniques (see section on exploratory factor analysis versus principal components analysis below). PCA can be considered as a more basic version of exploratory factor analysis (EFA) that was developed in the early days prior to the advent of high-speed computers. Both PCA and factor analysis aim to reduce the dimensionality of a set of data, but the approaches taken to do so are different for the two techniques. Factor analysis is clearly designed with the objective to identify certain unobservable factors from the observed variables, whereas PCA does not directly address this objective; at best, PCA provides an approximation to the required factors. [2] From the point of view of exploratory analysis, the eigenvalues of PCA are inflated component loadings, i.e., contaminated with error variance. [3] [4] [5] [6] [7] [8]

In multivariate statistics, exploratory factor analysis (EFA) is a statistical method used to uncover the underlying structure of a relatively large set of variables. EFA is a technique within factor analysis whose overarching goal is to identify the underlying relationships between measured variables. It is commonly used by researchers when developing a scale and serves to identify a set of latent constructs underlying a battery of measured variables. It should be used when the researcher has no a priori hypothesis about factors or patterns of measured variables. Measured variables are any one of several attributes of people that may be observed and measured. Examples of measured variables could be the physical height, weight, and pulse rate of a human being. Usually, researchers would have a large number of measured variables, which are assumed to be related to a smaller number of "unobserved" factors. Researchers must carefully consider the number of measured variables to include in the analysis. EFA procedures are more accurate when each factor is represented by multiple measured variables in the analysis.

Statistical model

Definition

Suppose we have a set of $p$ observable random variables, $x_1, \ldots, x_p$, with means $\mu_1, \ldots, \mu_p$.

Suppose for some unknown constants $l_{ij}$ and $k$ unobserved random variables $F_j$ (called "common factors," because they influence all the observed random variables), where $i \in \{1, \ldots, p\}$ and $j \in \{1, \ldots, k\}$, and where $k < p$, we have

$x_i - \mu_i = l_{i1} F_1 + \cdots + l_{ik} F_k + \varepsilon_i.$

Here, the $\varepsilon_i$ are unobserved stochastic error terms with zero mean and finite variance, which may not be the same for all $i$.

In matrix terms, we have

$x - \mu = L F + \varepsilon.$

If we have $n$ observations, then we will have the dimensions $x_{p \times n}$, $L_{p \times k}$, and $F_{k \times n}$. Each column of $x$ and $F$ denotes values for one particular observation, and matrix $L$ does not vary across observations.

Also we will impose the following assumptions on $F$:

  1. $F$ and $\varepsilon$ are independent.
  2. $\mathrm{E}(F) = 0$, where $\mathrm{E}$ is the expectation operator.
  3. $\mathrm{Cov}(F) = I$, where $I$ is the identity matrix (to make sure that the factors are uncorrelated).

Any solution of the above set of equations following the constraints for $F$ is defined as the factors, and $L$ as the loading matrix.

Suppose $\mathrm{Cov}(x - \mu) = \Sigma$. Then note that from the conditions just imposed on $F$, we have

$\mathrm{Cov}(x - \mu) = \mathrm{Cov}(L F + \varepsilon),$

or

$\Sigma = L \, \mathrm{Cov}(F) \, L^{T} + \mathrm{Cov}(\varepsilon),$

or

$\Sigma = L L^{T} + \Psi, \qquad \text{where } \Psi := \mathrm{Cov}(\varepsilon).$

Note that for any orthogonal matrix $Q$, if we set $L' = L Q$ and $F' = Q^{T} F$, the criteria for being factors and factor loadings still hold. Hence a set of factors and factor loadings is unique only up to an orthogonal transformation.
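
To see this non-uniqueness concretely, the following NumPy sketch (with arbitrary illustrative values, not taken from the article) applies a random orthogonal matrix $Q$ to a loading matrix and counter-rotates the factors, then checks that both the fitted values and the implied common covariance $L L^{T}$ are unchanged:

    import numpy as np

    rng = np.random.default_rng(0)
    p, k, n = 6, 2, 5                        # variables, factors, observations
    L = rng.normal(size=(p, k))              # an illustrative loading matrix
    F = rng.normal(size=(k, n))              # factor values for n observations

    # Random orthogonal matrix Q (QR decomposition of a Gaussian matrix)
    Q, _ = np.linalg.qr(rng.normal(size=(k, k)))

    L2 = L @ Q                               # rotated loadings
    F2 = Q.T @ F                             # counter-rotated factors

    print(np.allclose(L @ F, L2 @ F2))       # True: same systematic part of the model
    print(np.allclose(L @ L.T, L2 @ L2.T))   # True: same implied common covariance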

Example

Suppose a psychologist has the hypothesis that there are two kinds of intelligence, "verbal intelligence" and "mathematical intelligence", neither of which is directly observed. Evidence for the hypothesis is sought in the examination scores from each of 10 different academic fields of 1000 students. If each student is chosen randomly from a large population, then each student's 10 scores are random variables. The psychologist's hypothesis may say that for each of the 10 academic fields, the score averaged over the group of all students who share some common pair of values for verbal and mathematical "intelligences" is some constant times their level of verbal intelligence plus another constant times their level of mathematical intelligence, i.e., it is a combination of those two "factors". The numbers for a particular subject, by which the two kinds of intelligence are multiplied to obtain the expected score, are posited by the hypothesis to be the same for all intelligence level pairs, and are called the "factor loadings" for this subject. [ clarification needed ] For example, the hypothesis may hold that the average student's aptitude in the field of astronomy is

{10 × the student's verbal intelligence} + {6 × the student's mathematical intelligence}.

The numbers 10 and 6 are the factor loadings associated with astronomy. Other academic subjects may have different factor loadings.

Two students assumed to have identical degrees of the latent, unmeasured traits of verbal and mathematical intelligence may have different measured aptitudes in astronomy because individual aptitudes differ from average aptitudes and because of measurement error itself. Such differences make up what is collectively called the "error" — a statistical term that means the amount by which an individual, as measured, differs from what is average for or predicted by his or her levels of intelligence (see errors and residuals in statistics).

The observable data that go into factor analysis would be 10 scores of each of the 1000 students, a total of 10,000 numbers. The factor loadings and levels of the two kinds of intelligence of each student must be inferred from the data.
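
For illustration, a small simulation of this hypothetical setup can be sketched as follows (all numbers are invented for the example; only the astronomy loadings 10 and 6 follow the text):

    import numpy as np

    rng = np.random.default_rng(42)
    n_students, n_subjects, n_factors = 1000, 10, 2

    # Invented loadings; row 0 plays the role of astronomy with loadings (10, 6).
    L = rng.uniform(2, 10, size=(n_subjects, n_factors))
    L[0] = [10, 6]

    F = rng.normal(size=(n_factors, n_students))                  # latent verbal/mathematical levels
    mu = rng.uniform(40, 60, size=(n_subjects, 1))                # subject-wise average scores
    eps = rng.normal(scale=5.0, size=(n_subjects, n_students))    # individual deviations ("error")

    X = mu + L @ F + eps       # the observable 10 x 1000 table of examination scores
    print(X.shape)             # (10, 1000): the 10,000 observed numbers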

Mathematical model of the same example

In the following, matrices will be indicated by indexed variables. "Subject" indices will be indicated using letters $a$, $b$ and $c$, with values running from $1$ to $p$, which is equal to $10$ in the above example. "Factor" indices will be indicated using letters $p$, $q$ and $r$, with values running from $1$ to $k$, which is equal to $2$ in the above example. "Instance" or "sample" indices will be indicated using letters $i$, $j$ and $k$, with values running from $1$ to $N$. In the example above, if a sample of $N$ students responded to the $p$ questions, the $i$th student's score for the $a$th question is given by $x_{ai}$. The purpose of factor analysis is to characterize the correlations between the variables $x_a$ of which the $x_{ai}$ are a particular instance, or set of observations. In order for the variables to be on equal footing, they are normalized:

$z_{ai} = \frac{x_{ai} - \hat{\mu}_a}{\hat{\sigma}_a}$

where the sample mean is:

$\hat{\mu}_a = \frac{1}{N} \sum_i x_{ai}$

and the sample variance is given by:

$\hat{\sigma}_a^2 = \frac{1}{N-1} \sum_i \left( x_{ai} - \hat{\mu}_a \right)^2$

The factor analysis model for this particular sample is then:

$z_{1,i} = \ell_{1,1} F_{1,i} + \ell_{1,2} F_{2,i} + \varepsilon_{1,i}$
$\vdots$
$z_{10,i} = \ell_{10,1} F_{1,i} + \ell_{10,2} F_{2,i} + \varepsilon_{10,i}$

or, more succinctly:

$z_{ai} = \sum_{p} \ell_{ap} F_{pi} + \varepsilon_{ai}$

where $F_{1,i}$ is the $i$th student's "verbal intelligence", $F_{2,i}$ is the $i$th student's "mathematical intelligence", and $\ell_{ap}$ are the factor loadings for the $a$th subject, for $p = 1, 2$.

In matrix notation, we have

$Z = L F + \varepsilon$

where $Z$ is the $10 \times 1000$ matrix of standardized observations, $L$ the $10 \times 2$ matrix of loadings, $F$ the $2 \times 1000$ matrix of factor values, and $\varepsilon$ the $10 \times 1000$ matrix of error terms.

Observe that doubling the scale on which "verbal intelligence"—the first component in each column of $F$—is measured, while simultaneously halving the factor loadings for verbal intelligence, makes no difference to the model. Thus, no generality is lost by assuming that the standard deviation of the factors for verbal intelligence is 1. Likewise for mathematical intelligence. Moreover, for similar reasons, no generality is lost by assuming the two factors are uncorrelated with each other. In other words:

$\frac{1}{N} \sum_{i=1}^{N} F_{p,i} F_{q,i} = \delta_{pq}$

where $\delta_{pq}$ is the Kronecker delta ($0$ when $p \neq q$ and $1$ when $p = q$). The errors are assumed to be independent of the factors:

$\frac{1}{N} \sum_{i=1}^{N} F_{p,i} \varepsilon_{a,i} = 0.$

Note that, since any rotation of a solution is also a solution, this makes interpreting the factors difficult. See disadvantages below. In this particular example, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence without an outside argument.

The values of the loadings $L$, the averages $\mu$, and the variances of the "errors" $\varepsilon$ must be estimated given the observed data $X$ and $F$ (the assumption about the levels of the factors is fixed for a given $F$). The "fundamental theorem" may be derived from the above conditions:

$r_{ab} \equiv \frac{1}{N} \sum_i z_{ai} z_{bi} = \sum_p \ell_{ap} \ell_{bp} + \frac{1}{N} \sum_i \varepsilon_{ai} \varepsilon_{bi}$

The term on the left is the $(a,b)$ term of the correlation matrix (a $p \times p$ matrix) of the observed data, and its diagonal elements will be 1's. The last term on the right will be a diagonal matrix with terms less than unity. The first term on the right is the "reduced correlation matrix" and will be equal to the correlation matrix except for its diagonal values, which will be less than unity. These diagonal elements of the reduced correlation matrix are called "communalities" (which represent the fraction of the variance in the observed variable that is accounted for by the factors):

$h_a^2 = \sum_p \ell_{ap}^2$

The sample data $z_{ai}$ will not, of course, exactly obey the fundamental equation given above due to sampling errors, inadequacy of the model, etc. The goal of any analysis of the above model is to find the factors $F_{pi}$ and loadings $\ell_{ap}$ which, in some sense, give a "best fit" to the data. In factor analysis, the best fit is defined as the minimum of the mean square error in the off-diagonal residuals of the correlation matrix: [9]

$\sum_{a \neq b} \left( r_{ab} - \sum_p \ell_{ap} \ell_{bp} \right)^2$

This is equivalent to minimizing the off-diagonal components of the error covariance which, in the model equations have expected values of zero. This is to be contrasted with principal component analysis which seeks to minimize the mean square error of all residuals. [9] Before the advent of high speed computers, considerable effort was devoted to finding approximate solutions to the problem, particularly in estimating the communalities by other means, which then simplifies the problem considerably by yielding a known reduced correlation matrix. This was then used to estimate the factors and the loadings. With the advent of high-speed computers, the minimization problem can be solved iteratively with adequate speed, and the communalities are calculated in the process, rather than being needed beforehand. The MinRes algorithm is particularly suited to this problem, but is hardly the only iterative means of finding a solution.
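
As one concrete (non-MinRes) illustration of iterative fitting, scikit-learn's FactorAnalysis estimator (its documentation page is cited in the references) estimates the loadings and the diagonal error variances by maximum likelihood. A minimal sketch with placeholder data:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 10))            # placeholder: rows are observations, columns variables
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize so covariance equals correlation

    fa = FactorAnalysis(n_components=2).fit(X)

    L = fa.components_.T                       # 10 x 2 loading matrix
    psi = fa.noise_variance_                   # diagonal error variances

    # Off the diagonal, L L^T should reproduce the sample correlations;
    # the diagonal is completed by the error variances psi.
    implied = L @ L.T + np.diag(psi)
    sample = np.corrcoef(X, rowvar=False)
    off = ~np.eye(10, dtype=bool)
    print(np.abs(implied[off] - sample[off]).mean())   # small residual indicates a good fit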

If the solution factors are allowed to be correlated (as in oblimin rotation, for example), then the corresponding mathematical model uses skew coordinates rather than orthogonal coordinates.

Geometric interpretation

Geometric interpretation of factor analysis parameters for 3 respondents to question "a". The "answer" is represented by the unit vector $\mathbf{z}_{a}$, which is projected onto a plane defined by two orthonormal vectors $\mathbf{F}_{1}$ and $\mathbf{F}_{2}$. The projection vector is $\hat{\mathbf{z}}_{a}$ and the error $\boldsymbol{\varepsilon}_{a}$ is perpendicular to the plane, so that $\mathbf{z}_{a} = \hat{\mathbf{z}}_{a} + \boldsymbol{\varepsilon}_{a}$. The projection vector may be represented in terms of the factor vectors as $\hat{\mathbf{z}}_{a} = \ell_{a1} \mathbf{F}_{1} + \ell_{a2} \mathbf{F}_{2}$. The square of the length of the projection vector is the communality: $|\hat{\mathbf{z}}_{a}|^{2} = h_{a}^{2}$. If another data vector $\mathbf{z}_{b}$ were plotted, the cosine of the angle between $\mathbf{z}_{a}$ and $\mathbf{z}_{b}$ would be $r_{ab}$: the (a,b) entry in the correlation matrix. (Adapted from Harman Fig. 4.3)

The parameters and variables of factor analysis can be given a geometrical interpretation. The data ($z_{ai}$), the factors ($F_{pi}$) and the errors ($\varepsilon_{ai}$) can be viewed as vectors in an $N$-dimensional Euclidean space (sample space), represented as $\mathbf{z}_a$, $\mathbf{F}_p$ and $\boldsymbol{\varepsilon}_a$ respectively. Since the data are standardized, the data vectors are of unit length ($|\mathbf{z}_a| = 1$). The factor vectors define a $k$-dimensional linear subspace (i.e. a hyperplane) in this space, upon which the data vectors are projected orthogonally. This follows from the model equation

$\mathbf{z}_a = \sum_p \ell_{ap} \mathbf{F}_p + \boldsymbol{\varepsilon}_a$

and the independence of the factors and the errors: $\mathbf{F}_p \cdot \boldsymbol{\varepsilon}_a = 0$. In the above example, the hyperplane is just a 2-dimensional plane defined by the two factor vectors. The projection of the data vectors onto the hyperplane is given by

$\hat{\mathbf{z}}_a = \sum_p \ell_{ap} \mathbf{F}_p$

and the errors are vectors from that projected point to the data point and are perpendicular to the hyperplane. The goal of factor analysis is to find a hyperplane which is a "best fit" to the data in some sense, so it does not matter how the factor vectors which define this hyperplane are chosen, as long as they are independent and lie in the hyperplane. We are free to specify them as both orthogonal and normal ($\mathbf{F}_p \cdot \mathbf{F}_q = \delta_{pq}$) with no loss of generality. After a suitable set of factors is found, they may also be arbitrarily rotated within the hyperplane, so that any rotation of the factor vectors will define the same hyperplane and also be a solution. As a result, in the above example, in which the fitting hyperplane is two dimensional, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence, or whether the factors are linear combinations of both, without an outside argument.

The data vectors $\mathbf{z}_a$ have unit length. The correlation matrix for the data is given by $r_{ab} = \mathbf{z}_a \cdot \mathbf{z}_b$. The correlation matrix can be geometrically interpreted as the cosine of the angle between the two data vectors $\mathbf{z}_a$ and $\mathbf{z}_b$. The diagonal elements will clearly be 1's and the off-diagonal elements will have absolute values less than or equal to unity. The "reduced correlation matrix" is defined as

$\hat{r}_{ab} = \hat{\mathbf{z}}_a \cdot \hat{\mathbf{z}}_b.$

The goal of factor analysis is to choose the fitting hyperplane such that the reduced correlation matrix reproduces the correlation matrix as nearly as possible, except for the diagonal elements of the correlation matrix which are known to have unit value. In other words, the goal is to reproduce as accurately as possible the cross-correlations in the data. Specifically, for the fitting hyperplane, the mean square error in the off-diagonal components

$\sum_{a \neq b} \left( r_{ab} - \hat{r}_{ab} \right)^2$

is to be minimized, and this is accomplished by minimizing it with respect to a set of orthonormal factor vectors. It can be seen that

$r_{ab} - \hat{r}_{ab} = \boldsymbol{\varepsilon}_a \cdot \boldsymbol{\varepsilon}_b.$

The term on the right is just the covariance of the errors. In the model, the error covariance is stated to be a diagonal matrix and so the above minimization problem will in fact yield a "best fit" to the model: It will yield a sample estimate of the error covariance which has its off-diagonal components minimized in the mean square sense. It can be seen that since the $\hat{\mathbf{z}}_a$ are orthogonal projections of the data vectors, their length will be less than or equal to the length of the projected data vector, which is unity. The squares of these lengths are just the diagonal elements of the reduced correlation matrix. These diagonal elements of the reduced correlation matrix are known as "communalities":

$h_a^2 = |\hat{\mathbf{z}}_a|^2 = \hat{r}_{aa}.$

Large values of the communalities will indicate that the fitting hyperplane is rather accurately reproducing the correlation matrix. The mean values of the factors must also be constrained to be zero, from which it follows that the mean values of the errors will also be zero.
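
The geometry can be checked numerically. In this sketch (illustrative values only), a unit-length data vector is projected onto the span of two orthonormal factor vectors; the residual is perpendicular to the plane and the squared length of the projection equals the communality:

    import numpy as np

    rng = np.random.default_rng(3)
    N = 50                                    # number of observations = dimension of sample space

    # Two orthonormal "factor" vectors spanning the fitting hyperplane
    F, _ = np.linalg.qr(rng.normal(size=(N, 2)))

    z = rng.normal(size=N)                    # a data vector, standardized to unit length
    z /= np.linalg.norm(z)

    loadings = F.T @ z                        # (l_a1, l_a2): coordinates of the projection
    z_hat = F @ loadings                      # orthogonal projection onto the hyperplane
    eps = z - z_hat                           # error vector

    print(np.allclose(F.T @ eps, 0.0))        # True: the error is perpendicular to the plane
    print(np.linalg.norm(z_hat) ** 2, loadings @ loadings)   # equal: communality h_a^2 <= 1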

Practical implementation

Types of factor analysis

Exploratory factor analysis (EFA) is used to identify complex interrelationships among items and group items that are part of unified concepts. [10] The researcher makes no a priori assumptions about relationships among factors. [10]

Confirmatory factor analysis (CFA) is a more complex approach that tests the hypothesis that the items are associated with specific factors. [10] CFA uses structural equation modeling to test a measurement model whereby loading on the factors allows for evaluation of relationships between observed variables and unobserved variables. [10] Structural equation modeling approaches can accommodate measurement error, and are less restrictive than least-squares estimation. [10] Hypothesized models are tested against actual data, and the analysis would demonstrate loadings of observed variables on the latent variables (factors), as well as the correlation between the latent variables. [10]

Types of factor extraction

Principal component analysis (PCA) is a widely used method for factor extraction, which is the first phase of EFA. [10] Factor weights are computed to extract the maximum possible variance, with successive factoring continuing until there is no further meaningful variance left. [10] The factor model must then be rotated for analysis. [10]

Canonical factor analysis, also called Rao's canonical factoring, is a different method of computing the same model as PCA, which uses the principal axis method. Canonical factor analysis seeks factors which have the highest canonical correlation with the observed variables. Canonical factor analysis is unaffected by arbitrary rescaling of the data.

Common factor analysis, also called principal factor analysis (PFA) or principal axis factoring (PAF), seeks the least number of factors which can account for the common variance (correlation) of a set of variables.

Image factoring is based on the correlation matrix of predicted variables rather than actual variables, where each variable is predicted from the others using multiple regression.

Alpha factoring is based on maximizing the reliability of factors, assuming variables are randomly sampled from a universe of variables. All other methods assume cases to be sampled and variables fixed.

Factor regression model is a combinatorial model of factor model and regression model; or alternatively, it can be viewed as the hybrid factor model, [11] whose factors are partially known.

Terminology

Factor loadings: Communality is the square of the standardized outer loading of an item. Analogous to Pearson's r, the squared factor loading is the percent of variance in that indicator variable explained by the factor. To get the percent of variance in all the variables accounted for by each factor, sum the squared factor loadings for that factor (column) and divide by the number of variables. (Note that the number of variables equals the sum of their variances, as the variance of a standardized variable is 1.) This is the same as dividing the factor's eigenvalue by the number of variables.

Interpreting factor loadings: By one rule of thumb in confirmatory factor analysis, loadings should be .7 or higher to confirm that independent variables identified a priori are represented by a particular factor, on the rationale that the .7 level corresponds to about half of the variance in the indicator being explained by the factor. However, the .7 standard is a high one and real-life data may well not meet this criterion, which is why some researchers, particularly for exploratory purposes, will use a lower level such as .4 for the central factor and .25 for other factors. In any event, factor loadings must be interpreted in the light of theory, not by arbitrary cutoff levels.

In oblique rotation, one may examine both a pattern matrix and a structure matrix. The structure matrix is simply the factor loading matrix as in orthogonal rotation, representing the variance in a measured variable explained by a factor on both a unique and common contributions basis. The pattern matrix, in contrast, contains coefficients which just represent unique contributions. The more factors, the lower the pattern coefficients as a rule since there will be more common contributions to variance explained. For oblique rotation, the researcher looks at both the structure and pattern coefficients when attributing a label to a factor. Principles of oblique rotation can be derived from both cross entropy and its dual entropy. [12]

Communality: The sum of the squared factor loadings for all factors for a given variable (row) is the variance in that variable accounted for by all the factors, and this is called the communality. The communality measures the percent of variance in a given variable explained by all the factors jointly and may be interpreted as the reliability of the indicator in the context of the factors being posited.
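
The bookkeeping in the preceding entries can be written out directly. Given a hypothetical standardized loading matrix, column sums of squared loadings divided by the number of variables give the proportion of variance explained by each factor, row sums give the communalities, and one minus the communality gives the uniqueness:

    import numpy as np

    # Illustrative standardized loading matrix: 5 variables x 2 factors
    L = np.array([[0.8, 0.1],
                  [0.7, 0.2],
                  [0.6, 0.3],
                  [0.1, 0.9],
                  [0.2, 0.7]])

    p = L.shape[0]

    var_explained = (L ** 2).sum(axis=0) / p   # per factor (eigenvalue divided by number of variables)
    communalities = (L ** 2).sum(axis=1)       # per variable (row sums of squared loadings)
    uniqueness = 1.0 - communalities           # variability not accounted for by the factors

    print(var_explained)     # [0.308 0.288]
    print(communalities)     # [0.65 0.53 0.45 0.82 0.53]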

Spurious solutions: If the communality exceeds 1.0, there is a spurious solution, which may reflect too small a sample or the choice to extract too many or too few factors.

Uniqueness of a variable: The variability of a variable minus its communality.

Eigenvalues/characteristic roots: Eigenvalues measure the amount of variation in the total sample accounted for by each factor. The ratio of eigenvalues is the ratio of explanatory importance of the factors with respect to the variables. If a factor has a low eigenvalue, then it is contributing little to the explanation of variances in the variables and may be ignored as less important than the factors with higher eigenvalues.

Extraction sums of squared loadings: Initial eigenvalues and eigenvalues after extraction (listed by SPSS as "Extraction Sums of Squared Loadings") are the same for PCA extraction, but for other extraction methods, eigenvalues after extraction will be lower than their initial counterparts. SPSS also prints "Rotation Sums of Squared Loadings" and even for PCA, these eigenvalues will differ from initial and extraction eigenvalues, though their total will be the same.

Factor scores (also called component scores in PCA): the scores of each case (row) on each factor (column). To compute the factor score for a given case for a given factor, one takes the case's standardized score on each variable, multiplies by the corresponding loadings of the variable for the given factor, and sums these products. Computing factor scores allows one to look for factor outliers. Also, factor scores may be used as variables in subsequent modeling. (This description is from the PCA perspective, not the factor analysis perspective.)
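
A minimal sketch of this sum-of-products computation (using an invented loading matrix and placeholder data; most packages instead compute regression-based factor scores):

    import numpy as np

    rng = np.random.default_rng(7)
    X = rng.normal(size=(100, 5))                      # 100 cases x 5 variables (placeholder data)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)           # standardized scores

    L = np.array([[0.8, 0.1],                          # illustrative 5 x 2 loading matrix
                  [0.7, 0.2],
                  [0.6, 0.3],
                  [0.1, 0.9],
                  [0.2, 0.7]])

    scores = Z @ L                                     # one score per case per factor
    print(scores.shape)                                # (100, 2)
    # Cases with unusually large |score| can be inspected as potential factor outliers.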

Criteria for determining the number of factors

Researchers wish to avoid such subjective or arbitrary criteria for factor retention as "it made sense to me". A number of objective methods have been developed to solve this problem, allowing users to determine an appropriate range of solutions to investigate. Methods may not agree. For instance, the parallel analysis may suggest 5 factors while Velicer's MAP suggests 6, so the researcher may request both 5 and 6-factor solutions and discuss each in terms of their relation to external data and theory.

Modern criteria

Horn's parallel analysis (PA): A Monte-Carlo based simulation method that compares the observed eigenvalues with those obtained from uncorrelated normal variables. A factor or component is retained if the associated eigenvalue is bigger than the 95th percentile of the distribution of eigenvalues derived from the random data. PA is one of the most recommended rules for determining the number of components to retain, [13] [ citation needed ] but many programs fail to include this option (a notable exception being R). [14] However, Formann provided both theoretical and empirical evidence that its application might not be appropriate in many cases since its performance is considerably influenced by sample size, item discrimination, and type of correlation coefficient. [15]
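
A compact sketch of parallel analysis (assuming normally distributed comparison data and correlation-matrix eigenvalues; dedicated implementations, for example in R, offer more options):

    import numpy as np

    def parallel_analysis(X, n_sims=200, percentile=95, seed=0):
        """Horn's parallel analysis: retain factors whose observed eigenvalue
        exceeds the chosen percentile of eigenvalues from uncorrelated normal data."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]   # descending order

        sims = np.empty((n_sims, p))
        for s in range(n_sims):
            R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
            sims[s] = np.linalg.eigvalsh(R)[::-1]
        threshold = np.percentile(sims, percentile, axis=0)

        retained = 0
        for o, t in zip(obs, threshold):    # stop at the first eigenvalue below its threshold
            if o > t:
                retained += 1
            else:
                break
        return retained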

Velicer's (1976) MAP test [16] as described by Courtney (2013) [17] “involves a complete principal components analysis followed by the examination of a series of matrices of partial correlations” (p. 397; note, though, that this quotation does not occur in Velicer (1976), and the cited page number lies outside the pages of that article). The squared correlation for Step “0” (see Figure 4) is the average squared off-diagonal correlation for the unpartialed correlation matrix. On Step 1, the first principal component and its associated items are partialed out, and the average squared off-diagonal correlation of the resulting correlation matrix is computed. On Step 2, the first two principal components are partialed out and the resulting average squared off-diagonal correlation is again computed. The computations are carried out for k minus one steps (k representing the total number of variables in the matrix). Finally, the average squared correlations for all steps are lined up, and the step number that yielded the lowest average squared partial correlation determines the number of components or factors to retain. [16] By this method, components are maintained as long as the variance in the correlation matrix represents systematic variance, as opposed to residual or error variance. Although methodologically akin to principal components analysis, the MAP technique has been shown to perform quite well in determining the number of factors to retain in multiple simulation studies. [18] [19] [20] This procedure is made available through SPSS's user interface. See Courtney (2013) [17] for guidance.
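
The procedure can be sketched directly from the description above (a simplified illustration of the squared-partial-correlation version, taking a correlation matrix as input):

    import numpy as np

    def velicer_map(R):
        """Velicer's minimum average partial (MAP) test on a correlation matrix R.
        Returns the step (number of components) with the smallest average
        squared off-diagonal partial correlation."""
        p = R.shape[0]
        vals, vecs = np.linalg.eigh(R)
        order = np.argsort(vals)[::-1]
        vals, vecs = vals[order], vecs[:, order]

        off = ~np.eye(p, dtype=bool)
        avg_sq = [np.mean(R[off] ** 2)]                 # Step 0: unpartialed matrix

        for m in range(1, p):                           # Steps 1 .. p-1
            A = vecs[:, :m] * np.sqrt(vals[:m])         # loadings of the first m components
            C = R - A @ A.T                             # covariance with m components partialed out
            d = np.sqrt(np.diag(C))
            partial = C / np.outer(d, d)                # partial correlation matrix
            avg_sq.append(np.mean(partial[off] ** 2))

        return int(np.argmin(avg_sq))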

Older methods

Kaiser criterion: The Kaiser rule is to drop all components with eigenvalues under 1.0 – this being the eigenvalue equal to the information accounted for by an average single item. The Kaiser criterion is the default in SPSS and most statistical software but is not recommended when used as the sole cut-off criterion for estimating the number of factors as it tends to over-extract factors. [21] A variation of this method has been created where a researcher calculates confidence intervals for each eigenvalue and retains only factors which have the entire confidence interval greater than 1.0. [18] [22]

Scree plot: [23] The Cattell scree test plots the components as the X-axis and the corresponding eigenvalues as the Y-axis. As one moves to the right, toward later components, the eigenvalues drop. When the drop ceases and the curve makes an elbow toward less steep decline, Cattell's scree test says to drop all further components after the one starting at the elbow. This rule is sometimes criticised for being amenable to researcher-controlled "fudging". That is, as picking the "elbow" can be subjective because the curve has multiple elbows or is a smooth curve, the researcher may be tempted to set the cut-off at the number of factors desired by their research agenda.[ citation needed ]

Variance explained criteria: Some researchers simply use the rule of keeping enough factors to account for 90% (sometimes 80%) of the variation. Where the researcher's goal emphasizes parsimony (explaining variance with as few factors as possible), the criterion could be as low as 50%.
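
Both of these older rules reduce to one-liners once the eigenvalues of the correlation matrix are available (a sketch with invented eigenvalues):

    import numpy as np

    eigvals = np.array([2.9, 1.4, 0.8, 0.5, 0.3, 0.1])   # invented eigenvalues of a 6-variable correlation matrix

    # Kaiser criterion: keep components whose eigenvalue exceeds 1.0
    kaiser_k = int(np.sum(eigvals > 1.0))                 # 2

    # Variance-explained criterion: keep enough components to reach, say, 80%
    cum_var = np.cumsum(eigvals) / eigvals.sum()
    variance_k = int(np.searchsorted(cum_var, 0.80) + 1)  # 3

    print(kaiser_k, variance_k)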

Rotation methods

The unrotated output maximizes variance accounted for by the first and subsequent factors, and forces the factors to be orthogonal. This data-compression comes at the cost of having most items load on the early factors, and usually, of having many items load substantially on more than one factor. Rotation serves to make the output more understandable, by seeking so-called "Simple Structure": A pattern of loadings where each item loads strongly on only one of the factors, and much more weakly on the other factors. Rotations can be orthogonal or oblique (allowing the factors to correlate).

Varimax rotation is an orthogonal rotation of the factor axes to maximize the variance of the squared loadings of a factor (column) on all the variables (rows) in a factor matrix, which has the effect of differentiating the original variables by extracted factor. Each factor will tend to have either large or small loadings of any particular variable. A varimax solution yields results which make it as easy as possible to identify each variable with a single factor. This is the most common rotation option. However, the orthogonality (i.e., independence) of factors is often an unrealistic assumption. Oblique rotations are inclusive of orthogonal rotation, and for that reason, oblique rotations are a preferred method. Allowing for factors that are correlated with one another is especially applicable in psychometric research, since attitudes, opinions, and intellectual abilities tend to be correlated, and since it would be unrealistic in many situations to assume otherwise. [24]
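
Recent versions of scikit-learn expose varimax (and quartimax) rotation directly on the FactorAnalysis estimator. A minimal sketch with placeholder data, illustrating that an orthogonal rotation redistributes the loadings without changing the implied common covariance:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    X = np.random.default_rng(5).normal(size=(500, 8))      # placeholder data matrix

    unrotated = FactorAnalysis(n_components=2).fit(X)
    rotated = FactorAnalysis(n_components=2, rotation="varimax").fit(X)

    L0 = unrotated.components_.T                             # 8 x 2 loading matrices
    L1 = rotated.components_.T

    # The individual loadings differ, but L L^T (and hence the fit) is preserved.
    print(np.allclose(L0 @ L0.T, L1 @ L1.T))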

Quartimax rotation is an orthogonal alternative which minimizes the number of factors needed to explain each variable. This type of rotation often generates a general factor on which most variables are loaded to a high or medium degree. Such a factor structure is usually not helpful to the research purpose.

Equimax rotation is a compromise between varimax and quartimax criteria.

Direct oblimin rotation is the standard method when one wishes a non-orthogonal (oblique) solution – that is, one in which the factors are allowed to be correlated. This will result in higher eigenvalues but diminished interpretability of the factors. See below.[ clarification needed ]

Promax rotation is an alternative non-orthogonal (oblique) rotation method which is computationally faster than the direct oblimin method and therefore is sometimes used for very large datasets.

In psychometrics

History

Charles Spearman pioneered the use of factor analysis in the field of psychology and is sometimes credited with the invention of factor analysis. He discovered that school children's scores on a wide variety of seemingly unrelated subjects were positively correlated, which led him to postulate that a general mental ability, or g, underlies and shapes human cognitive performance. His postulate now enjoys broad support in the field of intelligence research, where it is known as the g theory.

In Q methodology, William Stephenson, a student of Spearman, distinguished between R factor analysis, oriented toward the study of inter-individual differences, and Q factor analysis, oriented toward subjective intra-individual differences. [25] [26]

Raymond Cattell expanded on Spearman's idea of a two-factor theory of intelligence after performing his own tests and factor analysis. He used a multi-factor theory to explain intelligence. Cattell's theory addressed alternative factors in intellectual development, including motivation and psychology. Cattell also developed several mathematical methods for adjusting psychometric graphs, such as his "scree" test and similarity coefficients. His research led to the development of his theory of fluid and crystallized intelligence, as well as his 16 Personality Factors theory of personality. Cattell was a strong advocate of factor analysis and psychometrics. He believed that all theory should be derived from research, which supports the continued use of empirical observation and objective testing to study human intelligence.

Applications in psychology

Factor analysis is used to identify "factors" that explain a variety of results on different tests. For example, intelligence research has found that people who score highly on a test of verbal ability also tend to do well on other tests that require verbal abilities. Researchers explained this by using factor analysis to isolate one factor, often called crystallized intelligence or verbal intelligence, which represents the degree to which someone is able to solve problems involving verbal skills.

Factor analysis in psychology is most often associated with intelligence research. However, it also has been used to find factors in a broad range of domains such as personality, attitudes, beliefs, etc. It is linked to psychometrics, as it can assess the validity of an instrument by finding if the instrument indeed measures the postulated factors.

Advantages

Disadvantages

Exploratory factor analysis versus principal components analysis

While exploratory factor analysis and principal component analysis are treated as synonymous techniques in some fields of statistics, this has been criticised (e.g. Fabrigar et al., 1999; [29] Suhr, 2009 [30] ). In factor analysis, the researcher makes the assumption that an underlying causal model exists, whereas PCA is simply a variable reduction technique. [31] Researchers have argued that the distinctions between the two techniques may mean that there are objective benefits for preferring one over the other based on the analytic goal. If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results. Factor analysis has been used successfully where adequate understanding of the system permits good initial model formulations. Principal component analysis employs a mathematical transformation to the original data with no assumptions about the form of the covariance matrix. The aim of PCA is to determine a few linear combinations of the original variables that can be used to summarize the data set without losing much information. [32]

Arguments contrasting PCA and EFA

Fabrigar et al. (1999) [29] address a number of reasons used to suggest that principal components analysis is not equivalent to factor analysis:

  1. It is sometimes suggested that principal components analysis is computationally quicker and requires fewer resources than factor analysis. Fabrigar et al. suggest that the ready availability of computer resources has rendered this practical concern irrelevant.
  2. PCA and factor analysis can produce similar results. This point is also addressed by Fabrigar et al.; in certain cases, where the communalities are low (e.g., .40), the two techniques produce divergent results. In fact, Fabrigar et al. argue that in cases where the data correspond to the assumptions of the common factor model, the results of PCA are inaccurate.
  3. There are certain cases where factor analysis leads to 'Heywood cases'. These encompass situations whereby 100% or more of the variance in a measured variable is estimated to be accounted for by the model. Fabrigar et al. suggest that these cases are actually informative to the researcher, indicating a misspecified model or a violation of the common factor model. The lack of Heywood cases in the PCA approach may mean that such issues pass unnoticed.
  4. Researchers gain extra information from a PCA approach, such as an individual's score on a certain component – such information is not yielded from factor analysis. However, as Fabrigar et al. contend, the typical aim of factor analysis – i.e. to determine the factors accounting for the structure of the correlations between measured variables – does not require knowledge of factor scores and thus this advantage is negated. It is also possible to compute factor scores from a factor analysis.

Variance versus covariance

Factor analysis takes into account the random error that is inherent in measurement, whereas PCA fails to do so. This point is exemplified by Brown (2009), [33] who indicated that, with respect to the correlation matrices involved in the calculations:

In PCA, 1.00s are put in the diagonal meaning that all of the variance in the matrix is to be accounted for (including variance unique to each variable, variance common among variables, and error variance). That would, therefore, by definition, include all of the variance in the variables. In contrast, in EFA, the communalities are put in the diagonal meaning that only the variance shared with other variables is to be accounted for (excluding variance unique to each variable and error variance). That would, therefore, by definition, include only variance that is common among the variables.

Brown (2009), Principal components analysis and exploratory factor analysis – Definitions, differences and choices

For this reason, Brown (2009) recommends using factor analysis when theoretical ideas about relationships between variables exist, whereas PCA should be used if the goal of the researcher is to explore patterns in their data.
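
Brown's point about the diagonal can be illustrated numerically (invented data; squared multiple correlations are used here as a common choice of initial communality estimates):

    import numpy as np

    rng = np.random.default_rng(11)
    X = rng.normal(size=(300, 6))
    X[:, 1] += 0.8 * X[:, 0]                       # induce some shared variance
    X[:, 2] += 0.8 * X[:, 0]

    R = np.corrcoef(X, rowvar=False)               # PCA decomposes this matrix: diagonal of 1.00s

    # Initial communality estimates: squared multiple correlation of each
    # variable with all the others, computed as 1 - 1/diag(R^-1)
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))

    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, smc)               # common factor analysis works with this matrix

    print(np.diag(R))                              # all 1.0
    print(np.round(smc, 2))                        # communality estimates, all below 1.0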

Differences in procedure and results

The differences between principal components analysis and factor analysis are further illustrated by Suhr (2009): [30]

In marketing

The basic steps are:

Information collection

The data collection stage is usually done by marketing research professionals. Survey questions ask the respondent to rate a product sample or descriptions of product concepts on a range of attributes. Anywhere from five to twenty attributes are chosen. They could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size. The attributes chosen will vary depending on the product being studied. The same question is asked about all the products in the study. The data for multiple products are coded and input into a statistical program such as R, SPSS, SAS, Stata, STATISTICA, JMP, or SYSTAT.

Analysis

The analysis will isolate the underlying factors that explain the data using a matrix of associations. [34] Factor analysis is an interdependence technique. The complete set of interdependent relationships is examined. There is no specification of dependent variables, independent variables, or causality. Factor analysis assumes that all the rating data on different attributes can be reduced down to a few important dimensions. This reduction is possible because some attributes may be related to each other. The rating given to any one attribute is partially the result of the influence of other attributes. The statistical algorithm deconstructs the rating (called a raw score) into its various components, and reconstructs the partial scores into underlying factor scores. The degree of correlation between the initial raw score and the final factor score is called a factor loading.

Advantages

Disadvantages

In physical and biological sciences

Factor analysis has also been widely used in physical sciences such as geochemistry, hydrochemistry, [35] astrophysics and cosmology, as well as biological sciences, such as ecology, molecular biology, neuroscience and biochemistry.

In groundwater quality management, it is important to relate the spatial distribution of different chemical parameters to different possible sources, which have different chemical signatures. For example, a sulfide mine is likely to be associated with high levels of acidity, dissolved sulfates and transition metals. These signatures can be identified as factors through R-mode factor analysis, and the location of possible sources can be suggested by contouring the factor scores. [36]

In geochemistry, different factors can correspond to different mineral associations, and thus to mineralisation. [37]

In microarray analysis

Factor analysis can be used for summarizing high-density oligonucleotide DNA microarray data at the probe level for Affymetrix GeneChips. In this case, the latent variable corresponds to the RNA concentration in a sample. [38]

Implementation

Factor analysis has been implemented in several statistical analysis programs since the 1980s:

See also

References

  1. Bartholomew, D.J.; Steele, F.; Galbraith, J.; Moustaki, I. (2008). Analysis of Multivariate Social Science Data. Statistics in the Social and Behavioral Sciences Series (2nd ed.). Taylor & Francis. ISBN   978-1584889601.
  2. Jolliffe I.T. Principal Component Analysis, Series: Springer Series in Statistics, 2nd ed., Springer, NY, 2002, XXIX, 487 p. 28 illus. ISBN   978-0-387-95442-4
  3. Cattell, R. B. (1952). Factor analysis. New York: Harper.
  4. Fruchter, B. (1954). Introduction to Factor Analysis. Van Nostrand.
  5. Cattell, R. B. (1978). Use of Factor Analysis in Behavioral and Life Sciences. New York: Plenum.
  6. Child, D. (2006). The Essentials of Factor Analysis, 3rd edition. Bloomsbury Academic Press.
  7. Gorsuch, R. L. (1983). Factor Analysis, 2nd edition. Hillsdale, NJ: Erlbaum.
  8. McDonald, R. P. (1985). Factor Analysis and Related Methods. Hillsdale, NJ: Erlbaum.
  9. Harman, Harry H. (1976). Modern Factor Analysis. University of Chicago Press. pp. 175, 176. ISBN 978-0-226-31652-9.
  10. Polit, D.F.; Beck, C.T. (2012). Nursing Research: Generating and Assessing Evidence for Nursing Practice, 9th ed. Philadelphia, USA: Wolters Kluwer Health, Lippincott Williams & Wilkins.
  11. Meng, J. (2011). "Uncover cooperative gene regulations by microRNAs and transcription factors in glioblastoma using a nonnegative hybrid factor model". International Conference on Acoustics, Speech and Signal Processing. Archived from the original on 2011-11-23.
  12. Liou, C.-Y.; Musicus, B.R. (2008). "Cross Entropy Approximation of Structured Gaussian Covariance Matrices". IEEE Transactions on Signal Processing. 56 (7): 3362–3367. doi:10.1109/TSP.2008.917878.
  13. Dobriban, Edgar (2017-10-02). "Permutation methods for factor analysis and PCA". arXiv:1710.00479v2.
  14. Tran, U. S., & Formann, A. K. (2009). Performance of parallel analysis in retrieving unidimensionality in the presence of binary data. Educational and Psychological Measurement, 69, 50-61.
  15. Velicer, W.F. (1976). "Determining the number of components from the matrix of partial correlations". Psychometrika. 41 (3): 321–327. doi:10.1007/bf02293557.
  16. Courtney, M. G. R. (2013). Determining the number of factors to retain in EFA: Using the SPSS R-Menu v2.0 to make more judicious estimations. Practical Assessment, Research and Evaluation, 18(8). Available online: http://pareonline.net/getvn.asp?v=18&n=8
  17. Warne, R. T.; Larsen, R. (2014). "Evaluating a proposed modification of the Guttman rule for determining the number of factors in an exploratory factor analysis". Psychological Test and Assessment Modeling. 56: 104–123.
  18. Ruscio, John; Roche, B. (2012). "Determining the number of factors to retain in an exploratory factor analysis using comparison data of known factorial structure". Psychological Assessment. 24 (2): 282–292. doi:10.1037/a0025697. PMID   21966933.
  19. Garrido, L. E., & Abad, F. J., & Ponsoda, V. (2012). A new look at Horn's parallel analysis with ordinal variables. Psychological Methods. Advance online publication. doi : 10.1037/a0030005
  20. Bandalos, D.L.; Boehm-Kaufman, M.R. (2008). "Four common misconceptions in exploratory factor analysis". In Lance, Charles E.; Vandenberg, Robert J. (eds.). Statistical and Methodological Myths and Urban Legends: Doctrine, Verity and Fable in the Organizational and Social Sciences. Taylor & Francis. pp. 61–87. ISBN   978-0-8058-6237-9.
  21. Larsen, R.; Warne, R. T. (2010). "Estimating confidence intervals for eigenvalues in exploratory factor analysis". Behavior Research Methods. 42 (3): 871–876. doi:10.3758/BRM.42.3.871. PMID   20805609.
  22. Cattell, Raymond (1966). "The scree test for the number of factors". Multivariate Behavioral Research. 1 (2): 245–76. doi:10.1207/s15327906mbr0102_10. PMID   26828106.
  23. Russell, D.W. (December 2002). "In search of underlying dimensions: The use (and abuse) of factor analysis in Personality and Social Psychology Bulletin". Personality and Social Psychology Bulletin. 28 (12): 1629–46. doi:10.1177/014616702237645.
  24. Mckeown, Bruce (2013-06-21). Q Methodology. ISBN   9781452242194. OCLC   841672556.
  25. Stephenson, W. (August 1935). "Technique of Factor Analysis". Nature. 136 (3434): 297. doi:10.1038/136297b0. ISSN   0028-0836.
  26. Sternberg, R. J. (1977). Metaphors of Mind: Conceptions of the Nature of Intelligence. New York: Cambridge University Press. pp. 85–111.[ verification needed ]
  27. "Factor Analysis". Archived from the original on August 18, 2004. Retrieved July 22, 2004.
  28. Fabrigar; et al. (1999). "Evaluating the use of exploratory factor analysis in psychological research" (PDF). Psychological Methods.
  29. Suhr, Diane (2009). "Principal component analysis vs. exploratory factor analysis" (PDF). SUGI 30 Proceedings. Retrieved 5 April 2012.
  30. SAS Statistics. "Principal Components Analysis" (PDF). SAS Support Textbook.
  31. Meglen, R.R. (1991). "Examining Large Databases: A Chemometric Approach Using Principal Component Analysis". Journal of Chemometrics. 5 (3): 163–179. doi:10.1002/cem.1180050305.
  32. Brown, J. D. (January 2009). "Principal components analysis and exploratory factor analysis – Definitions, differences and choices" (PDF). Shiken: JALT Testing & Evaluation SIG Newsletter. Retrieved 16 April 2012.
  33. Ritter, N. (2012). A comparison of distribution-free and non-distribution free methods in factor analysis. Paper presented at Southwestern Educational Research Association (SERA) Conference 2012, New Orleans, LA (ED529153).
  34. Subbarao, C.; Subbarao, N.V.; Chandu, S.N. (December 1996). "Characterisation of groundwater contamination using factor analysis". Environmental Geology. 28 (4): 175–180. doi:10.1007/s002540050091.
  35. Love, D.; Hallbauer, D.K.; Amos, A.; Hranova, R.K. (2004). "Factor analysis as a tool in groundwater quality management: two southern African case studies". Physics and Chemistry of the Earth. 29 (15–18): 1135–43. doi:10.1016/j.pce.2004.09.027.
  36. Barton, E.S.; Hallbauer, D.K. (1996). "Trace-element and U—Pb isotope compositions of pyrite types in the Proterozoic Black Reef, Transvaal Sequence, South Africa: Implications on genesis and age". Chemical Geology. 133 (1–4): 173–199. doi:10.1016/S0009-2541(96)00075-7.
  37. Hochreiter, Sepp; Clevert, Djork-Arné; Obermayer, Klaus (2006). "A new summarization method for affymetrix probe level data". Bioinformatics. 22 (8): 943–9. doi:10.1093/bioinformatics/btl033. PMID   16473874.
  38. http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.FactorAnalysis.html
  39. MacCallum, Robert (June 1983). "A comparison of factor analysis programs in SPSS, BMDP, and SAS". Psychometrika. 48 (2): 223–231. doi:10.1007/BF02294017.

Further reading