Confirmatory factor analysis

In statistics, confirmatory factor analysis (CFA) is a special form of factor analysis, most commonly used in social science research. [1] It is used to test whether measures of a construct are consistent with a researcher's understanding of the nature of that construct (or factor). As such, the objective of confirmatory factor analysis is to test whether the data fit a hypothesized measurement model. This hypothesized model is based on theory and/or previous analytic research. [2] CFA was first developed by Jöreskog (1969) [3] and has built upon and replaced older methods of analyzing construct validity, such as the multitrait-multimethod (MTMM) matrix described in Campbell & Fiske (1959). [4]

In confirmatory factor analysis, the researcher first develops a hypothesis about what factors they believe are underlying the measures used (e.g., "Depression" being the factor underlying the Beck Depression Inventory and the Hamilton Rating Scale for Depression) and may impose constraints on the model based on these a priori hypotheses. By imposing these constraints, the researcher is forcing the model to be consistent with their theory. For example, if it is posited that there are two factors accounting for the covariance in the measures, and that these factors are unrelated to each other, the researcher can create a model where the correlation between factor A and factor B is constrained to zero. Model fit measures could then be obtained to assess how well the proposed model captured the covariance between all the items or measures in the model. If the constraints the researcher has imposed on the model are inconsistent with the sample data, then the results of statistical tests of model fit will indicate a poor fit, and the model will be rejected. If the fit is poor, it may be due to some items measuring multiple factors. It might also be that some items within a factor are more related to each other than others.
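
To make the zero-correlation constraint concrete, the following is a minimal sketch in Python/NumPy (not taken from any cited study; the loading and variance values are hypothetical) showing how such a two-factor model translates into a model-implied covariance matrix, Σ = ΛΦΛ′ + Θ, with the factor covariance fixed at zero.

```python
import numpy as np

# Hypothetical 6-indicator, 2-factor CFA: items 1-3 load on factor A, items 4-6 on factor B.
# Loadings on the "other" factor are constrained to zero, as in a typical CFA specification.
Lambda = np.array([
    [0.8, 0.0],
    [0.7, 0.0],
    [0.6, 0.0],
    [0.0, 0.9],
    [0.0, 0.8],
    [0.0, 0.7],
])

# Factor covariance matrix Phi: fixing the off-diagonal element to 0 encodes the
# hypothesis that factors A and B are unrelated.
Phi = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
])

# Diagonal matrix of residual (unique) variances, one per indicator.
Theta = np.diag([0.36, 0.51, 0.64, 0.19, 0.36, 0.51])

# Model-implied covariance matrix: Sigma = Lambda Phi Lambda' + Theta.
Sigma = Lambda @ Phi @ Lambda.T + Theta
print(np.round(Sigma, 2))
```

Fit statistics would then compare this model-implied matrix against the covariance matrix actually observed in the sample.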

For some applications, the requirement of "zero loadings" (for indicators not supposed to load on a certain factor) has been regarded as too strict. A newly developed analysis method, "exploratory structural equation modeling", specifies hypotheses about the relation between observed indicators and their supposed primary latent factors while allowing for estimation of loadings with other latent factors as well. [5]

Statistical model

In confirmatory factor analysis, researchers are typically interested in studying the degree to which responses on a p × 1 vector of observable random variables can be used to assign a value to one or more unobserved latent variable(s) η. The investigation is largely accomplished by estimating and evaluating the loading of each item used to tap aspects of the unobserved latent variable. That is, y is the vector of observed responses predicted by the unobserved latent variable η, via the model

\[ y = \Lambda \eta + \varepsilon , \]

where y is the p × 1 vector of observed random variables, η is the k × 1 vector of unobserved latent variables and Λ is a p × k matrix of loadings, with k equal to the number of latent variables. [6] Since the observed variables y are imperfect measures of η, the model also includes an error term ε. Estimates in the maximum likelihood (ML) case are generated by iteratively minimizing the fit function

\[ F_{\mathrm{ML}} = \ln\lvert\Sigma(\theta)\rvert - \ln\lvert S\rvert + \operatorname{tr}\!\left(S\,\Sigma(\theta)^{-1}\right) - p , \]

where Σ(θ) is the variance-covariance matrix implied by the proposed factor analysis model and S is the observed variance-covariance matrix. [6] That is, values are found for the free model parameters that minimize the difference between the model-implied variance-covariance matrix and the observed variance-covariance matrix.
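
For illustration, here is a minimal sketch (Python/NumPy; the matrices below are hypothetical) that evaluates this fit function for a given Σ(θ) and S. An actual CFA program would minimize this quantity over the free parameters rather than evaluate it once.

```python
import numpy as np

def ml_fit_function(S, Sigma):
    """ML discrepancy: F = ln|Sigma| - ln|S| + tr(S Sigma^{-1}) - p."""
    p = S.shape[0]
    _, logdet_sigma = np.linalg.slogdet(Sigma)
    _, logdet_s = np.linalg.slogdet(S)
    return logdet_sigma - logdet_s + np.trace(S @ np.linalg.inv(Sigma)) - p

# Hypothetical observed covariance matrix S and model-implied matrix Sigma of the same order;
# in practice Sigma is a function of the free parameters being estimated.
S = np.array([[1.00, 0.45, 0.40],
              [0.45, 1.00, 0.35],
              [0.40, 0.35, 1.00]])
Sigma = np.array([[1.00, 0.42, 0.42],
                  [0.42, 1.00, 0.42],
                  [0.42, 0.42, 1.00]])
print(ml_fit_function(S, Sigma))  # 0 would indicate exact reproduction of S
```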

Alternative estimation strategies

Although numerous algorithms have been used to estimate CFA models, maximum likelihood (ML) remains the primary estimation procedure. [7] That said, CFA models are often applied to data conditions that deviate from the normal theory requirements for valid ML estimation. For example, social scientists often estimate CFA models with non-normal data and indicators scaled using discrete ordered categories. [8] Accordingly, alternative algorithms have been developed that attend to the diverse data conditions applied researchers encounter. The alternative estimators have been characterized into two general types: (1) robust and (2) limited-information estimators. [9]

When ML is implemented with data that deviate from the assumptions of normal theory, CFA models may produce biased parameter estimates and misleading conclusions. [10] Robust estimation typically attempts to correct the problem by adjusting the normal theory model χ2 and standard errors. [9] For example, Satorra and Bentler (1994) recommended using ML estimation in the usual way and subsequently dividing the model χ2 by a measure of the degree of multivariate kurtosis. [11] An added advantage of robust ML estimators is their availability in common SEM software (e.g., lavaan). [12]
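
The exact Satorra–Bentler scaling factor is built from a weight matrix and the asymptotic covariance matrix of the sample moments, which is too involved for a short example. As a rough illustration of the idea only, the sketch below (Python/NumPy, simulated data, hypothetical χ2 value) rescales a normal-theory model χ2 by Mardia's relative multivariate kurtosis; this is a simplification of the kurtosis-adjustment idea, not the actual Satorra–Bentler correction.

```python
import numpy as np

def relative_multivariate_kurtosis(X):
    """Mardia's multivariate kurtosis divided by its expectation p(p + 2) under normality."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    d2 = np.einsum('ij,jk,ik->i', Xc, S_inv, Xc)   # squared Mahalanobis distances
    b2p = np.mean(d2 ** 2)
    return b2p / (p * (p + 2))

# Hypothetical: chi2_ml is the normal-theory model chi-square from an ML fit of the CFA model.
rng = np.random.default_rng(0)
X = rng.standard_t(df=5, size=(500, 6))           # heavy-tailed data violating normality
chi2_ml = 42.0
chi2_rescaled = chi2_ml / relative_multivariate_kurtosis(X)
print(chi2_rescaled)                              # smaller than chi2_ml when kurtosis is excessive
```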

Unfortunately, robust ML estimators can become untenable under common data conditions. In particular, when indicators are scaled using few response categories (e.g., disagree, neutral, agree) robust ML estimators tend to perform poorly. [10] Limited information estimators, such as weighted least squares (WLS), are likely a better choice when manifest indicators take on an ordinal form. [13] Broadly, limited information estimators attend to the ordinal indicators by using polychoric correlations to fit CFA models. [14] Polychoric correlations capture the covariance between two latent variables when only their categorized form is observed, which is achieved largely through the estimation of threshold parameters. [15]
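
The threshold part of this estimation is straightforward to illustrate: assuming a standard normal latent response underlying each ordinal item, each threshold is the normal quantile of the cumulative proportion of responses in the categories up to that point. A minimal sketch (Python with SciPy; the response counts are hypothetical):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical counts for a 3-category ordinal item: disagree, neutral, agree.
counts = np.array([120, 210, 170])
cum_props = np.cumsum(counts) / counts.sum()

# Thresholds tau_c = Phi^{-1}(P(X <= c)) for all but the last (open-ended) category,
# assuming a standard-normal latent response variable underlying the ordered categories.
thresholds = norm.ppf(cum_props[:-1])
print(thresholds)
```

The polychoric correlation itself is then estimated from the bivariate contingency table of two ordinal items, given their thresholds.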

Exploratory factor analysis

Both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) are employed to understand shared variance of measured variables that is believed to be attributable to a factor or latent construct. Despite this similarity, however, EFA and CFA are conceptually and statistically distinct analyses.

The goal of EFA is to identify factors based on data and to maximize the amount of variance explained. [16] The researcher is not required to have any specific hypotheses about how many factors will emerge, and what items or variables these factors will comprise. If these hypotheses exist, they are not incorporated into and do not affect the results of the statistical analyses. By contrast, CFA evaluates a priori hypotheses and is largely driven by theory. CFA analyses require the researcher to hypothesize, in advance, the number of factors, whether or not these factors are correlated, and which items/measures load onto and reflect which factors. [17] As such, in contrast to exploratory factor analysis, where all loadings are free to vary, CFA allows for the explicit constraint of certain loadings to be zero.

EFA is often considered to be more appropriate than CFA in the early stages of scale development because CFA does not show how well items load on the non-hypothesized factors. [18] Another strong argument for the initial use of EFA is that the misspecification of the number of factors at an early stage of scale development will typically not be detected by confirmatory factor analysis. At later stages of scale development, confirmatory techniques may provide more information through the explicit contrast of competing factor structures. [18]

EFA is sometimes reported in research when CFA would be a better statistical approach. [19] It has been argued that CFA can be restrictive and inappropriate when used in an exploratory fashion. [20] However, the idea that CFA is solely a “confirmatory” analysis may sometimes be misleading, as modification indices used in CFA are somewhat exploratory in nature. Modification indices show the improvement in model fit if a particular coefficient were to become unconstrained. [21] Likewise, EFA and CFA do not have to be mutually exclusive analyses; EFA has been argued to be a reasonable follow up to a poor-fitting CFA model. [22]

Structural equation modeling

Structural equation modeling software is typically used for performing confirmatory factor analysis. LISREL, [23] EQS, [24] AMOS, [25] Mplus [26] and the lavaan package in R [27] are popular software programs. There is also the Python package semopy 2. [28] CFA is also frequently used as a first step to assess the proposed measurement model in a structural equation model. Many of the rules of interpretation regarding assessment of model fit and model modification in structural equation modeling apply equally to CFA. CFA is distinguished from structural equation modeling by the fact that in CFA, there are no directed arrows between latent factors. In other words, while in CFA factors are not presumed to directly cause one another, SEM often does specify particular factors and variables to be causal in nature. In the context of SEM, the CFA is often called 'the measurement model', while the relations between the latent variables (with directed arrows) constitute 'the structural model'.
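
As an illustration, a minimal CFA specification in semopy 2 might look like the sketch below. This assumes semopy's lavaan-style model syntax and its Model/fit/inspect interface; the indicator names and data file are hypothetical, so consult the package documentation for authoritative usage.

```python
import pandas as pd
import semopy

# Hypothetical dataset with six observed indicators x1..x6 (e.g., questionnaire items).
data = pd.read_csv("items.csv")

# Measurement model only (a CFA): two correlated latent factors, with no directed
# arrows between them. Adding structural regressions (e.g., "B ~ A") would turn
# this into a full structural equation model.
desc = """
A =~ x1 + x2 + x3
B =~ x4 + x5 + x6
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())            # parameter estimates
print(semopy.calc_stats(model))   # fit statistics (chi-square, CFI, RMSEA, ...)
```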

Evaluating model fit

In CFA, several statistical tests are used to determine how well the model fits the data. [16] Note that a good fit between the model and the data does not mean that the model is “correct”, or even that it explains a large proportion of the covariance. A “good model fit” only indicates that the model is plausible. [29] When reporting the results of a confirmatory factor analysis, one is urged to report: a) the proposed models, b) any modifications made, c) which measures identify each latent variable, d) correlations between latent variables, and e) any other pertinent information, such as whether constraints are used. [30] With regard to selecting model fit statistics to report, one should not simply report the statistics that estimate the best fit, though this may be tempting. Though several varying opinions exist, Kline (2010) recommends reporting the chi-squared test, the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual (SRMR). [1]

Absolute fit indices

Absolute fit indices determine how well the a priori model fits, or reproduces the data. [31] Absolute fit indices include, but are not limited to, the Chi-Squared test, RMSEA, GFI, AGFI, RMR, and SRMR. [32]

Chi-squared test

The chi-squared test indicates the difference between the observed and expected covariance matrices. Values closer to zero indicate a better fit, i.e., a smaller difference between the expected and observed covariance matrices. [21] Chi-squared statistics can also be used to directly compare the fit of nested models to the data. One difficulty with the chi-squared test of model fit, however, is that researchers may fail to reject an inappropriate model in small sample sizes and reject an appropriate model in large sample sizes. [21] As a result, other measures of fit have been developed.
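
For a sense of the arithmetic, the normal-theory test statistic is commonly taken as (N − 1) times the minimized ML fit function, with degrees of freedom equal to the number of unique variances and covariances minus the number of free parameters. A sketch with hypothetical numbers (Python with SciPy; some programs multiply by N rather than N − 1):

```python
from scipy.stats import chi2

# Hypothetical values: minimized ML fit function, sample size, indicators, free parameters.
F_ml = 0.12       # minimum of the ML discrepancy function
N = 300           # sample size
p = 6             # number of observed indicators
q = 13            # number of free parameters in the model

T = (N - 1) * F_ml                 # model chi-square statistic
df = p * (p + 1) // 2 - q          # degrees of freedom
p_value = chi2.sf(T, df)           # probability of a larger chi-square under exact fit
print(T, df, p_value)
```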

Root mean square error of approximation

The root mean square error of approximation (RMSEA) avoids issues of sample size by analyzing the discrepancy between the hypothesized model, with optimally chosen parameter estimates, and the population covariance matrix. [32] The RMSEA ranges from 0 to 1, with smaller values indicating better model fit. A value of .06 or less is indicative of acceptable model fit. [33] [34]
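
A widely used point estimate of the RMSEA is based on how much the model χ2 exceeds its degrees of freedom. A sketch under that convention (Python; the χ2, df, and N values are hypothetical; some programs use N rather than N − 1 in the denominator):

```python
import math

def rmsea(chi2_stat, df, n):
    """Point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2_stat - df, 0.0) / (df * (n - 1)))

# Hypothetical model chi-square, degrees of freedom, and sample size.
print(rmsea(35.88, 8, 300))   # ~0.108, above the .06 guideline, suggesting poor fit
```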

Root mean square residual and standardized root mean square residual

The root mean square residual (RMR) and standardized root mean square residual (SRMR) are the square root of the discrepancy between the sample covariance matrix and the model covariance matrix. [32] The RMR may be somewhat difficult to interpret, however, as its range depends on the scales of the indicators in the model (this becomes problematic when a model includes indicators measured on different scales, e.g., two questionnaires, one on a 0–10 scale, the other on a 1–3 scale). [1] The standardized root mean square residual removes this difficulty in interpretation, and ranges from 0 to 1, with a value of .08 or less being indicative of an acceptable model. [33]
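
One common way to compute the SRMR is to standardize each residual by the corresponding observed standard deviations and then average the squared residuals over the unique elements of the covariance matrix. The sketch below follows that definition (Python/NumPy; the matrices are hypothetical); software packages differ slightly, for example in whether the diagonal elements are included.

```python
import numpy as np

def srmr(S, Sigma):
    """Square root of the mean squared standardized residual over unique elements."""
    p = S.shape[0]
    sd = np.sqrt(np.diag(S))
    resid = (S - Sigma) / np.outer(sd, sd)        # standardized residuals
    idx = np.tril_indices(p)                      # unique elements (incl. diagonal)
    return np.sqrt(np.mean(resid[idx] ** 2))

S = np.array([[1.00, 0.45, 0.40],
              [0.45, 1.00, 0.35],
              [0.40, 0.35, 1.00]])
Sigma = np.array([[1.00, 0.42, 0.42],
                  [0.42, 1.00, 0.42],
                  [0.42, 0.42, 1.00]])
print(srmr(S, Sigma))
```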

Goodness of fit index and adjusted goodness of fit index

The goodness of fit index (GFI) is a measure of fit between the hypothesized model and the observed covariance matrix. The adjusted goodness of fit index (AGFI) corrects the GFI, which is affected by the number of indicators of each latent variable. The GFI and AGFI range between 0 and 1, with a value of over .9 generally indicating acceptable model fit. [35]

Relative fit indices

Relative fit indices (also called “incremental fit indices” [36] and “comparative fit indices” [37] ) compare the chi-square for the hypothesized model to that of a “null”, or “baseline”, model. [31] This null model is almost always one in which all of the variables are uncorrelated and, as a result, it has a very large chi-square (indicating poor fit). [32] Relative fit indices include the normed fit index and the comparative fit index.

Normed fit index and non-normed fit index

The normed fit index (NFI) analyzes the discrepancy between the chi-squared value of the hypothesized model and the chi-squared value of the null model. [38] However, NFI tends to be negatively biased. [37] The non-normed fit index (NNFI; also known as the Tucker–Lewis index, as it was built on an index formed by Tucker and Lewis, in 1973 [39] ) resolves some of the issues of negative bias, though NNFI values may sometimes fall beyond the 0 to 1 range. [37] Values for both the NFI and NNFI should range between 0 and 1, with a cutoff of .95 or greater indicating a good model fit. [40]

Comparative fit index

The comparative fit index (CFI) analyzes the model fit by examining the discrepancy between the data and the hypothesized model, while adjusting for the issues of sample size inherent in the chi-squared test of model fit [21] and the normed fit index. [37] CFI values range from 0 to 1, with larger values indicating better fit. Previously, a CFI value of .90 or larger was considered to indicate acceptable model fit. [40] However, more recent studies have indicated that a value greater than .90 is needed to ensure that misspecified models are not deemed acceptable. [40] Thus, a CFI value of .95 or higher is presently accepted as an indicator of good fit.
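
All of the relative fit indices above are simple functions of the χ2 values and degrees of freedom of the hypothesized and null models. A sketch with the standard formulas and hypothetical values (Python):

```python
def nfi(chi2_m, chi2_0):
    """Normed fit index: proportional reduction in chi-square relative to the null model."""
    return (chi2_0 - chi2_m) / chi2_0

def nnfi(chi2_m, df_m, chi2_0, df_0):
    """Non-normed fit index (Tucker-Lewis index); may fall outside the 0-1 range."""
    return (chi2_0 / df_0 - chi2_m / df_m) / (chi2_0 / df_0 - 1.0)

def cfi(chi2_m, df_m, chi2_0, df_0):
    """Comparative fit index, based on noncentrality (chi-square minus df), bounded to [0, 1]."""
    d_m = max(chi2_m - df_m, 0.0)
    d_0 = max(chi2_0 - df_0, 0.0)
    return 1.0 - d_m / max(d_0, d_m, 1e-12)

# Hypothetical chi-square values and degrees of freedom for the fitted and null models.
chi2_m, df_m = 35.88, 8
chi2_0, df_0 = 480.0, 15
print(nfi(chi2_m, chi2_0), nnfi(chi2_m, df_m, chi2_0, df_0), cfi(chi2_m, df_m, chi2_0, df_0))
```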

Identification and underidentification

To estimate the parameters of a model, the model must be properly identified. That is, the number of estimated (unknown) parameters (q) must be less than or equal to the number of unique variances and covariances among the measured variables; p(p + 1)/2. This equation is known as the "t rule". If there is too little information available on which to base the parameter estimates, then the model is said to be underidentified, and model parameters cannot be estimated appropriately. [41]
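
As a quick arithmetic illustration of the t rule (hypothetical model): a one-factor model for p = 4 indicators with the factor variance fixed to 1 estimates 4 loadings and 4 residual variances, so q = 8 against p(p + 1)/2 = 10 unique elements, leaving 2 degrees of freedom. Note that satisfying the t rule is necessary but does not by itself guarantee identification.

```python
def t_rule(p, q):
    """Return (unique moments, degrees of freedom, t rule satisfied?) for p indicators and q free parameters."""
    moments = p * (p + 1) // 2
    return moments, moments - q, q <= moments

# Hypothetical one-factor model with 4 indicators and the factor variance fixed to 1:
# 4 loadings + 4 residual variances = 8 free parameters.
print(t_rule(p=4, q=8))   # (10, 2, True): the counting condition is met
```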


References

  1. Kline, R. B. (2010). Principles and practice of structural equation modeling (3rd ed.). New York, NY: Guilford Press.
  2. Preedy, V. R., & Watson, R. R. (2009). Handbook of Disease Burdens and Quality of Life Measures. New York: Springer.
  3. Jöreskog, K. G. (1969). A general approach to confirmatory maximum likelihood factor analysis. Psychometrika, 34(2), 183-202.
  4. Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.
  5. Asparouhov, T., & Muthén, B. (2009). Exploratory structural equation modeling. Structural Equation Modeling, 16, 397-438.
  6. Yang-Wallentin, Fan; Jöreskog, Karl G.; Luo, Hao (2010). "Confirmatory Factor Analysis of Ordinal Variables With Misspecified Models". Structural Equation Modeling. 17 (3): 392–423. doi:10.1080/10705511.2010.489003.
  7. Flora, David B.; Curran, Patrick J. (2004). "An Empirical Evaluation of Alternative Methods of Estimation for Confirmatory Factor Analysis With Ordinal Data". Psychological Methods. 9 (4): 466–491. doi:10.1037/1082-989x.9.4.466. PMC 3153362. PMID 15598100.
  8. Millsap, Roger E.; Yun-Tein, Jenn (2004). "Assessing Factorial Invariance in Ordered-Categorical Measures". Multivariate Behavioral Research. 39 (3): 479–515. doi:10.1207/S15327906MBR3903_4.
  9. Bandalos, Deborah L. (2014). "Relative Performance of Categorical Diagonally Weighted Least Squares and Robust Maximum Likelihood Estimation". Structural Equation Modeling. 21 (1): 102–116. doi:10.1080/10705511.2014.859510.
  10. Li, Cheng-Hsien (2015). "Confirmatory factor analysis with ordinal data: Comparing robust maximum likelihood and diagonally weighted least squares". Behavior Research Methods. 48 (3): 936–949. doi:10.3758/s13428-015-0619-7. PMID 26174714.
  11. Bryant, Fred B.; Satorra, Albert (2012). "Principles and Practice of Scaled Difference Chi-Square Testing". Structural Equation Modeling. 19 (3): 372–398. doi:10.1080/10705511.2012.687671. hdl:10230/46110.
  12. Rosseel, Yves (2012). "lavaan: An R Package for Structural Equation Modeling". Journal of Statistical Software. 48 (2). doi:10.18637/jss.v048.i02.
  13. Rhemtulla, Mijke; Brosseau-Liard, Patricia É.; Savalei, Victoria (2012). "When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions". Psychological Methods. 17 (3): 354–373. doi:10.1037/a0029315. PMID 22799625.
  14. Yang-Wallentin, Fan; Jöreskog, Karl G.; Luo, Hao (2010). "Confirmatory Factor Analysis of Ordinal Variables With Misspecified Models". Structural Equation Modeling. 17 (3): 392–423. doi:10.1080/10705511.2010.489003.
  15. Olsson, Ulf (1979). "Maximum likelihood estimation of the polychoric correlation coefficient". Psychometrika. 44 (4): 443–460. doi:10.1007/BF02296207.
  16. Suhr, D. D. (2006). "Exploratory or confirmatory factor analysis?" (PDF). Statistics and Data Analysis. 31. Retrieved April 20, 2012.
  17. Thompson, B. (2004). Exploratory and confirmatory factor analysis: Understanding concepts and applications. Washington, DC: American Psychological Association.
  18. Kelloway, E. K. (1995). "Structural equation modelling in perspective". Journal of Organizational Behavior. 16 (3): 215–224. doi:10.1002/job.4030160304.
  19. Levine, T. R. (2005). "Confirmatory factor analysis and scale validation in communication research". Communication Research Reports. 22 (4): 335–338. doi:10.1080/00036810500317730.
  20. Browne, M. W. (2001). "An overview of analytic rotation in exploratory factor analysis". Multivariate Behavioral Research. 36 (1): 111–150. doi:10.1207/S15327906MBR3601_05.
  21. Gatignon, H. (2010). "Confirmatory Factor Analysis". Statistical Analysis of Management Data. Springer. pp. 59–122. doi:10.1007/978-1-4419-1270-1_4. ISBN 978-1-4419-1269-5.
  22. Schmitt, T. A. (2011). "Current methodological considerations in exploratory and confirmatory factor analysis". Journal of Psychoeducational Assessment. 29 (4): 304–321. doi:10.1177/0734282911406653.
  23. CFA with LISREL. Archived 2009-05-28 at the Wayback Machine.
  24. Byrne, B. M. (2006). Structural equation modeling with EQS: Basic concepts, application, and programming. New Jersey: Lawrence Erlbaum Associates.
  25. CFA using AMOS.
  26. Mplus homepage.
  27. "The lavaan Project".
  28. Meshcheryakov, Georgy; Igolkina, Anna A.; Samsonova, Maria G. (2021). "semopy 2: A Structural Equation Modeling Package with Random Effects in Python". arXiv:2106.01140 [stat.AP].
  29. Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research Online, 8(2), 23-74.
  30. Jackson, D. L., Gillaspy, J. A., & Purc-Stephenson, R. (2009). Reporting practices in confirmatory factor analysis: An overview and some recommendations. Psychological Methods, 14(1), 6-23.
  31. McDonald, R. P., & Ho, M. H. R. (2002). Principles and practice in reporting structural equation analyses. Psychological Methods, 7(1), 64-82.
  32. Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modelling: Guidelines for determining model fit. Journal of Business Research Methods, 6, 53–60.
  33. Hu, Li-tze; Bentler, Peter M. (1999). "Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives". Structural Equation Modeling. 6 (1): 1–55. doi:10.1080/10705519909540118. hdl:2027.42/139911.
  34. Brown, Timothy (2015). Confirmatory factor analysis for applied research. New York: The Guilford Press. p. 72. ISBN 978-1-4625-1779-4.
  35. Baumgartner, H., & Homburg, C. (1996). Applications of structural equation modeling in marketing and consumer research: A review. International Journal of Research in Marketing, 13, 139-161.
  36. Tanaka, J. S. (1993). "Multifaceted conceptions of fit in structural equation models". In Bollen, K. A.; Long, J. S. (eds.). Testing structural equation models. Newbury Park, CA: Sage. pp. 136–162. ISBN 0-8039-4506-X.
  37. Bentler, P. M. (1990). "Comparative fit indexes in structural models". Psychological Bulletin. 107 (2): 238–246. doi:10.1037/0033-2909.107.2.238. PMID 2320703.
  38. Bentler, P. M.; Bonett, D. G. (1980). "Significance tests and goodness of fit in the analysis of covariance structures". Psychological Bulletin. 88 (3): 588–606. doi:10.1037/0033-2909.88.3.588.
  39. Tucker, L. R.; Lewis, C. (1973). "A reliability coefficient for maximum likelihood factor analysis". Psychometrika. 38: 1–10. doi:10.1007/BF02291170.
  40. Hu, L.; Bentler, P. M. (1999). "Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives". Structural Equation Modeling. 6 (1): 1–55. doi:10.1080/10705519909540118.
  41. Babyak, M. A.; Green, S. B. (2010). "Confirmatory factor analysis: An introduction for psychosomatic medicine researchers". Psychosomatic Medicine. 72 (6): 587–597. doi:10.1097/PSY.0b013e3181de3f8a. PMID 20467001.

Further reading