Original author(s) | Ned Kock |
---|---|
Initial release | 2009 |
Stable release | |
Operating system | Windows |
Platform | MATLAB |
Available in | English |
Type | Statistical analysis, data collection, structural equation modeling, multivariate analysis |
License | Proprietary software |
Website | www |
WarpPLS is a software application with a graphical user interface for variance-based and factor-based structural equation modeling (SEM) using partial least squares and factor-based methods. [1] [2] The software can be used in empirical research to analyse collected data (e.g., from questionnaire surveys) and to test hypothesized relationships. Because it runs on the MATLAB Compiler Runtime, it does not require the MATLAB development environment to be installed, and it can be installed and used on operating systems other than Windows through virtual installations.
Among the main features of WarpPLS is its ability to identify and model non-linearity among variables in path models, whether these variables are measured as latent variables or not, yielding parameters that take the corresponding underlying heterogeneity into consideration. [3] [4] [5] [6] [7]
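The following is a hedged illustration of the general idea, not WarpPLS's actual "warping" algorithm: a linear fit between two standardized variables misses a U-shaped component that a simple nonlinear (here, quadratic) fit captures, leaving less residual variance. All data and coefficients are synthetic.

```python
import numpy as np

# Illustrative sketch only (not the WarpPLS algorithm): compare a linear fit
# with a quadratic fit between two standardized variables whose true
# relationship has a U-shaped (nonlinear) component.
rng = np.random.default_rng(42)
x = rng.normal(size=500)
y = 0.4 * x + 0.5 * x**2 + rng.normal(scale=0.5, size=500)  # U-shaped term

# Standardize both variables, as path modeling works with standardized scores.
x = (x - x.mean()) / x.std()
y = (y - y.mean()) / y.std()

lin_coef = np.polyfit(x, y, 1)    # linear model
quad_coef = np.polyfit(x, y, 2)   # nonlinear (quadratic) model

lin_resid = y - np.polyval(lin_coef, x)
quad_resid = y - np.polyval(quad_coef, x)
print(lin_resid.var(), quad_resid.var())  # quadratic leaves less unexplained variance
```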
Other notable features are summarized below: [8] [9] [10] [11]
Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable, i.e., multivariate random variables. Multivariate statistics concerns understanding the different aims and background of each of the different forms of multivariate analysis, and how they relate to each other. The practical application of multivariate statistics to a particular problem may involve several types of univariate and multivariate analyses in order to understand the relationships between variables and their relevance to the problem being studied.
Psychological statistics is the application of formulas, theorems, numbers and laws to psychology. Statistical methods for psychology include the development and application of statistical theory and methods for modeling psychological data. These methods include psychometrics, factor analysis, experimental design, and Bayesian statistics.
In statistics, path analysis is used to describe the directed dependencies among a set of variables. This includes models equivalent to any form of multiple regression analysis, factor analysis, canonical correlation analysis, discriminant analysis, as well as more general families of models in the multivariate analysis of variance and covariance analyses.
Partial least squares regression is a statistical method that bears some relation to principal components regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the observable variables to a new space. Because both the X and Y data are projected to new spaces, the PLS family of methods are known as bilinear factor models. Partial least squares discriminant analysis (PLS-DA) is a variant used when the Y is categorical.
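A minimal one-component PLS sketch in NumPy can illustrate the "projection to a new space" idea: the predictors are projected onto a weight vector chosen for maximum covariance with the response, and the response is then regressed on the resulting score. Real PLS software extracts multiple components with deflation; the data here are synthetic.

```python
import numpy as np

# One-component PLS sketch (NIPALS-style) using only NumPy.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(scale=0.1, size=100)

Xc = X - X.mean(axis=0)       # center predictors
yc = y - y.mean()             # center response

w = Xc.T @ yc                 # weight vector: direction of maximum covariance with y
w /= np.linalg.norm(w)
t = Xc @ w                    # score vector: projection of X onto w ("new space")
b = (t @ yc) / (t @ t)        # regress y on the score

y_hat = y.mean() + t * b
print(np.corrcoef(y_hat, y)[0, 1])  # high correlation on this near-linear data
```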
In statistics, multicollinearity is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a high degree of accuracy. In this situation, the coefficient estimates of the multiple regression may change erratically in response to small changes in the data or the procedure used to fit the model.
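A common diagnostic for this phenomenon is the variance inflation factor (VIF), computed per predictor as 1 / (1 − R²) from regressing that predictor on the remaining ones. The sketch below, with synthetic data, shows two nearly collinear predictors receiving large VIFs while an independent predictor does not.

```python
import numpy as np

# Variance inflation factors: VIF_j = 1 / (1 - R²_j), where R²_j comes from
# regressing predictor j on the remaining predictors.
def vif(X):
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(X)), others])     # intercept + others
        beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ beta
        r2 = 1 - resid.var() / X[:, j].var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.05, size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)                    # independent predictor
v = vif(np.column_stack([x1, x2, x3]))
print(v)  # first two VIFs are large, third is near 1
```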
Structural equation modeling (SEM) is a diverse set of methods used by scientists doing both observational and experimental research. SEM is used mostly in the social and behavioral sciences but it is also used in epidemiology, business, and other fields. A definition of SEM is difficult without reference to technical language, but a good starting place is the name itself.
Path coefficients are standardized versions of linear regression weights which can be used in examining the possible causal linkage between statistical variables in the structural equation modeling approach. The standardization involves multiplying the ordinary regression coefficient by the ratio of the standard deviation of the corresponding explanatory variable to that of the outcome variable: the resulting coefficients can then be compared to assess the relative effects of the variables within the fitted regression model. The idea of standardization can be extended to apply to partial regression coefficients.
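This standardization can be verified numerically: scaling the raw slope by s_x / s_y gives the same value as refitting the regression on z-scored variables. The data below are synthetic.

```python
import numpy as np

# A standardized (path) coefficient equals the raw slope scaled by s_x / s_y,
# which matches the slope obtained after z-scoring both variables.
rng = np.random.default_rng(7)
x = rng.normal(loc=10, scale=3, size=300)
y = 2.0 * x + rng.normal(scale=4, size=300)

b = np.polyfit(x, y, 1)[0]          # raw (unstandardized) slope
beta = b * x.std() / y.std()        # standardized coefficient

zx = (x - x.mean()) / x.std()       # z-score both variables
zy = (y - y.mean()) / y.std()
beta_z = np.polyfit(zx, zy, 1)[0]   # slope on standardized data

print(beta, beta_z)  # the two standardized estimates agree
```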
In statistics, latent variables are variables that can only be inferred indirectly, through a mathematical model, from other variables that can be directly observed or measured. Such latent variable models are used in many disciplines, including political science, demography, engineering, medicine, ecology, physics, machine learning/artificial intelligence, bioinformatics, chemometrics, natural language processing, management, psychology and the social sciences.
Latent growth modeling is a statistical technique used in the structural equation modeling (SEM) framework to estimate growth trajectories. It is a longitudinal analysis technique to estimate growth over a period of time. It is widely used in the field of psychology, behavioral science, education and social science. It is also called latent growth curve analysis. The latent growth model was derived from theories of SEM. General purpose SEM software, such as OpenMx, lavaan, AMOS, Mplus, LISREL, or EQS among others may be used to estimate growth trajectories.
In statistics, confirmatory factor analysis (CFA) is a special form of factor analysis, most commonly used in social science research. It is used to test whether measures of a construct are consistent with a researcher's understanding of the nature of that construct. As such, the objective of confirmatory factor analysis is to test whether the data fit a hypothesized measurement model. This hypothesized model is based on theory and/or previous analytic research. CFA was first developed by Jöreskog (1969) and has built upon and replaced older methods of analyzing construct validity such as the MTMM Matrix as described in Campbell & Fiske (1959).
Nereu Florencio "Ned" Kock is a Brazilian-American philosopher. He is a Texas A&M Regents Professor of Information Systems at Texas A&M International University.
In applied statistics, common-method variance (CMV) is the spurious "variance that is attributable to the measurement method rather than to the constructs the measures are assumed to represent" or, equivalently, "systematic error variance shared among variables measured with and introduced as a function of the same method and/or source". For example, an electronic survey method might influence results for those who might be unfamiliar with an electronic survey interface differently than for those who might be familiar. If measures are affected by CMV or common-method bias, the intercorrelations among them can be inflated or deflated depending upon several factors. Although it is sometimes assumed that CMV affects all variables, evidence suggests that whether or not the correlation between two variables is affected by CMV is a function of both the method and the particular constructs being measured.
The partial least squares path modeling or partial least squares structural equation modeling is a method for structural equation modeling that allows estimation of complex cause-effect relationships in path models with latent variables.
SmartPLS is a software application with a graphical user interface for variance-based structural equation modeling (SEM) using the partial least squares (PLS) path modeling method. Users can estimate models with their data by using basic PLS-SEM, weighted PLS-SEM (WPLS), consistent PLS-SEM (PLSc-SEM), and sumscores regression algorithms. The software computes standard results assessment criteria and supports additional statistical analyses. Since SmartPLS is programmed in Java, it can be run on different computer operating systems such as Windows and Mac.
In statistics (classical test theory), average variance extracted (AVE) is a measure of the amount of variance that is captured by a construct in relation to the amount of variance due to measurement error.
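For a reflective construct with standardized indicator loadings, the classical AVE formula reduces to the mean squared loading, with 0.50 the conventional adequacy threshold. The loadings below are hypothetical values for illustration.

```python
import numpy as np

# AVE for standardized loadings lam_i: AVE = sum(lam_i**2) / n,
# i.e., the mean squared loading. Threshold for adequacy: 0.50.
loadings = np.array([0.82, 0.76, 0.71, 0.68])  # hypothetical loadings
ave = np.mean(loadings ** 2)
print(round(ave, 3))  # → 0.554, above the 0.50 threshold
```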
In statistics, confirmatory composite analysis (CCA) is a sub-type of structural equation modeling (SEM). Although, historically, CCA emerged from a re-orientation and re-start of partial least squares path modeling (PLS-PM), it has become an independent approach and the two should not be confused. In many ways it is similar to, but also quite distinct from confirmatory factor analysis (CFA). It shares with CFA the process of model specification, model identification, model estimation, and model assessment. However, in contrast to CFA which always assumes the existence of latent variables, in CCA all variables can be observable, with their interrelationships expressed in terms of composites, i.e., linear compounds of subsets of the variables. The composites are treated as the fundamental objects and path diagrams can be used to illustrate their relationships. This makes CCA particularly useful for disciplines examining theoretical concepts that are designed to attain certain goals, so-called artifacts, and their interplay with theoretical concepts of behavioral sciences.
Marko Sarstedt is a German academic and a marketing researcher. He is a Full Professor at the Ludwig Maximilian University of Munich and Adjunct Research Professor at Babeș-Bolyai University.
Necessary condition analysis (NCA) is a research approach and tool for identifying "necessary conditions" in datasets: conditions whose absence guarantees the absence of a particular outcome. For example, admission to a Ph.D. program requires an adequate GMAT score; AIDS cannot develop without HIV; and organizational change will not occur without the commitment of management. Each such condition can act as a bottleneck for the desired outcome: if it is absent, the outcome fails, and no other contributing factor can compensate. The presence of a necessary condition, however, does not guarantee success; the condition is necessary but not sufficient. To avoid failure, every distinct necessary condition must be satisfied simultaneously. NCA provides the rationale and methodological tools for identifying and assessing necessary conditions in existing or new datasets. It is a powerful method for investigating causal relationships and determining the minimum requirements that must be present for an outcome to be achieved.
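The "necessary but not sufficient" pattern has a simple visual signature that can be checked numerically (this hedged sketch is not the official NCA software): when x is necessary for y, the upper-left region of the scatter plot, high y with low x, stays empty, while high x does not guarantee high y. The thresholds and data below are synthetic.

```python
import numpy as np

# Sketch of a necessity pattern: y is bounded above by x (a ceiling),
# so high y cannot occur without high x, but high x permits any y.
rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 400)
y = rng.uniform(0, 1, 400) * x   # y <= x: necessity without sufficiency

x_thresh, y_thresh = 0.5, 0.6
upper_left = np.sum((x < x_thresh) & (y > y_thresh))   # violations of necessity
lower_right = np.sum((x > x_thresh) & (y < y_thresh))  # high x without high y

print(upper_left, lower_right)  # no violations; many high-x, low-y cases
```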