Willem Saris

Willem Egbert (Wim) Saris (born 8 July 1943) is a Dutch sociologist and Emeritus Professor of Statistics and Methodology, known especially for his work on "Causal modelling in non-experimental research"[1] and on measurement error, for example multitrait-multimethod (MTMM) analyses and the development of the Survey Quality Predictor (SQP) program.[2]

Biography

Saris was born in Leiden, South Holland, Netherlands, in 1943. He completed his studies in sociology at Utrecht University in 1968 and earned his PhD from the University of Amsterdam in 1979. In 1983, he became a full professor of political science, with a focus on the methodology of the social sciences. Until 2001, he worked at the University of Amsterdam. In 1984, he created the Sociometric Research Foundation in order to improve social science research through the application of statistics. In 1998, he became a member of the methodology group that facilitated the creation of the European Social Survey (ESS). As a consequence, he was also a member of the Central Coordinating Team of the ESS from 2000 until 2012.

In 2001, he moved to Barcelona, where he was granted a position as ICREA professor at ESADE. From 2009 to 2012, he was also director of the Research and Expertise Centre for Survey Methodology (RECSM) at Pompeu Fabra University. He was also one of the founders and the first chairman of the European Survey Research Association (ESRA).[citation needed]

Academic career

Studies

Political decision making

Over a long period, Saris worked with Irmtraud Gallhofer on an applied research project studying political decision making on the basis of minutes of governmental meetings and notes of advisers. After developing reliable instruments for analysis, the team applied their approach to a variety of decisions of the Dutch government as well as to major decisions in world history, such as those concerning the start of the First and Second World Wars, the Cuban Missile Crisis and the use of the atomic bombs in 1945. The first phase of the study focused on the argumentation of individual decision makers. In a later phase, the research was extended to the study of collective decision making by the same governmental groups. The result was that the decision makers used relatively simple arguments with respect to serious and far-reaching decisions of war and peace.[citation needed]

Statistical aspects of structural equation models (SEM)

Concerned about the testing procedures of structural equation modelling, Saris worked together with Albert Satorra to improve them. Together they developed several procedures to evaluate structural equation models. The final product was a procedure for detecting misspecifications in these models that takes the power of the tests into account. A program (JRule) implementing these tests was developed by William van der Veld.[citation needed]
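
The logic of this procedure can be illustrated with a short sketch. The snippet below is a minimal illustration, not the JRule program itself: for a single fixed parameter it combines the modification index (MI) and the expected parameter change (EPC) to compute the power of detecting a deviation of a chosen size, and classifies the outcome accordingly. The function name, thresholds and example values are assumptions chosen for illustration.

    # A minimal sketch (assumed names and thresholds) of judging a possible
    # misspecification from a modification index (MI) and the expected parameter
    # change (EPC), taking the power of the test into account.
    from scipy.stats import chi2, ncx2

    def judge_misspecification(mi, epc, delta=0.1, alpha=0.05, high_power=0.80):
        """Classify one fixed parameter given its MI and EPC.

        delta is the smallest deviation that the researcher considers a
        substantively relevant misspecification.
        """
        crit = chi2.ppf(1 - alpha, df=1)          # critical value of the 1-df chi-square test
        ncp = mi * (delta / epc) ** 2             # noncentrality if the true deviation were delta
        power = 1 - ncx2.cdf(crit, df=1, nc=ncp)  # power to detect a deviation of size delta

        significant = mi > crit
        if significant and power < high_power:
            verdict = "misspecification present"
        elif significant and power >= high_power:
            # high power: inspect the EPC to judge whether the deviation is relevant
            verdict = ("relevant misspecification" if abs(epc) >= delta
                       else "no relevant misspecification")
        elif not significant and power >= high_power:
            verdict = "no relevant misspecification"
        else:
            verdict = "inconclusive: the test lacks power"
        return power, verdict

    # Example with made-up values for one fixed parameter:
    print(judge_misspecification(mi=8.2, epc=0.04))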

Improvement of measurement in survey research

Application of SEM showed how large the errors in survey data were. He therefore directed his research towards procedures to improve measurement instruments, such as the use of continuous response scales instead of category scales. In doing so, he found that people could respond in very precise but different ways. This variation in response functions would be perceived as measurement error if it were not detected. To prevent this problem, he suggested using two fixed reference points on a scale: points whose meaning could not be in doubt. Studies were also undertaken to improve measurement through computer-assisted data collection, especially in order to remove the effect of the interviewer on the data. The latter research led in 1986 to the development of the Telepanel, a procedure comparable with web surveys but created before the web existed.

Given the well-known fact that most people do not know much about many of the issues discussed, and inspired by the decision-theoretic approach of the earlier project, a line of research was started to develop a decision aid enabling people to participate in decision making on complex issues. In this context he developed a decision aid, the Choice questionnaire, in cooperation with Peter Neijens and Jan de Ridder. In this procedure, respondents had to evaluate the necessary information about an issue before they were asked to make their choice. Because even experts in the different fields proposed measurement instruments that lacked validity, he created, together with Irmtraud Gallhofer, the three-step procedure for designing survey questions. The suggestion is that use of this procedure guarantees the validity of the proposed questions.[citation needed]

Evaluation of survey measurement instruments with respect to reliability and validity

When he realized that survey measures could indeed be improved but that the errors could never be reduced to zero, he redirected his research towards estimating the size of the systematic and random errors in measures, in order to be able to correct for measurement error. Inspired by the work of Frank Andrews (1984), a research program was developed in cooperation with 11 research groups from different European countries. First, the approach for evaluating measurement instruments, the so-called multitrait-multimethod (MTMM) experiment, was evaluated, and a new model for MTMM data was developed in cooperation with Frank Andrews. This so-called true score model allows random and systematic errors to be separated. In addition, a new design for MTMM experiments, the split-ballot MTMM design, was developed in cooperation with Albert Satorra and Germà Coenders in order to reduce the number of repetitions of the same questions. Next, data were collected with this design within the Telepanel and the ESS, and these data were analysed by members of his research group at RECSM to determine the reliability (the complement of random errors) and validity (the complement of systematic errors) of the measures.[citation needed]
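
The structure of the true score model can be shown with a small simulation sketch, under the simplifying assumption that all variables are standardized and only one trait-method combination is considered; the coefficient values below are purely illustrative, not estimates from any MTMM study.

    # True score model for one trait F measured with one method M
    # (standardized variables assumed):
    #   T = v*F + m*M   (systematic part: validity coefficient v, method effect m)
    #   Y = r*T + e     (observed response: reliability coefficient r, random error e)
    # so reliability = r^2, validity = v^2 and total quality = (r*v)^2.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    r, v = 0.9, 0.95                      # illustrative reliability and validity coefficients
    m = np.sqrt(1 - v**2)                 # method effect chosen so that var(T) = 1

    F = rng.standard_normal(n)            # trait factor
    M = rng.standard_normal(n)            # method factor, independent of the trait
    T = v * F + m * M                     # true score
    Y = r * T + np.sqrt(1 - r**2) * rng.standard_normal(n)  # observed variable

    print("reliability r^2:", np.corrcoef(Y, T)[0, 1] ** 2)  # ~0.81
    print("validity    v^2:", np.corrcoef(T, F)[0, 1] ** 2)  # ~0.90
    print("quality (r*v)^2:", np.corrcoef(Y, F)[0, 1] ** 2)  # ~0.73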

Prediction of the quality of surveys

Probably the most important development in his research took place when he realized that one could never study the reliability and validity of all questions: this would be too expensive and time-consuming. The alternative he suggested was to develop a coding system for the characteristics of questions and to carry out a meta-analysis that tried to predict, on the basis of these coded characteristics, the reliability and validity of the questions. If the predictions were good enough, the procedure could also be used for other questions that had not yet been studied but whose characteristics had been coded in the same way. It turned out that the prediction of question quality, as determined in the MTMM experiments, on the basis of question characteristics was quite good.

This led him to design a computer-assisted expert system that uses all available information about data quality to predict the quality of new questions. This program, called the Survey Quality Predictor (SQP), was first developed by him for MS-DOS and later transformed into a Windows version. A newer version (SQP 2.0), made by Daniel Oberski, is based on 3,700 questions evaluated in MTMM experiments. Users can add their own questions to the database and receive evaluations of the reliability and validity of their questions, as well as information about possible improvements. The importance of the program is that its evaluation is not limited to questions that have already been evaluated experimentally but can be applied to any question whose characteristics are coded in the system. All these questions and quality predictions are available to all users free of charge, so the knowledge base of questions and quality predictions grows day by day. At present the program already contains 67,000 questions, introduced by more than 2,000 unique users.[citation needed]
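
The prediction step can be sketched schematically. The example below is not the actual SQP model; it only illustrates the general idea of predicting question quality from coded question characteristics, using invented features, simulated "MTMM-based" quality values, and a random forest chosen purely for illustration.

    # Illustrative sketch of the SQP idea: predict the measurement quality of a
    # question from coded characteristics, training on quality estimates obtained
    # in MTMM experiments. Features, data and model choice are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n = 500  # pretend we have 500 questions with MTMM-based quality estimates

    # Hypothetical coded characteristics of each question.
    X = np.column_stack([
        rng.integers(2, 12, n),        # number of answer categories
        rng.integers(0, 2, n),         # all categories labelled? (0/1)
        rng.integers(0, 2, n),         # interviewer-administered? (0/1)
        rng.integers(5, 40, n),        # number of words in the question
    ])
    # Hypothetical quality estimates (reliability * validity) from MTMM experiments.
    y = np.clip(0.6 + 0.02 * X[:, 0] + 0.1 * X[:, 1]
                - 0.05 * X[:, 2] - 0.003 * X[:, 3]
                + rng.normal(0, 0.05, n), 0, 1)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # Predict the quality of a new, not yet experimentally evaluated question
    # from its coded characteristics alone.
    new_question = [[11, 1, 0, 15]]
    print("predicted quality:", model.predict(new_question)[0])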

Awards

As a member of the Central Coordinating Team of the European Social Survey, he became a laureate of the Descartes Prize 2005 for the best collaborative research. In 2009, he received the Helen Dinerman Award of the World Association for Public Opinion Research (WAPOR) in recognition of his lifelong contributions to the methodology of public opinion research. In 2011, he received a doctorate honoris causa from the University of Debrecen (Hungary). In 2013, he received ESRA's prize for important service to survey research. In 2014, he and Daniel Oberski were awarded the Warren J. Mitofsky Innovators Award by the American Association for Public Opinion Research for the Survey Quality Predictor (SQP 2.0) and their contribution to improving questionnaire design.[citation needed]

Notable contributions

Most relevant publications

Saris has authored and co-authored numerous publications.[3][4] His most relevant publications fall under the following main topics:

Books
Political decision making
Structural equation models
Improvement of measurement
Multitrait-multimethod research
The development of a procedure to predict the quality of survey questions


References

  1. Bagozzi, Richard P.; Yi, Youjae (1988). "On the evaluation of structural equation models". Journal of the Academy of Marketing Science. 16 (1): 74–94. doi:10.1177/009207038801600107. S2CID 122653824.
  2. Saris, W. E. and Gallhofer, I. N. (2014). Design, Evaluation and Analysis of Questionnaires for Survey Research. Second edition. Hoboken: Wiley.
  3. List of publications at the National Library of the Netherlands
  4. Willem E. Saris: List of publications 1971-2007 at saris.sqp.nl