Nomological network

A nomological network (or nomological net [1] ) is a representation of the concepts (constructs) of interest in a study, their observable manifestations, and the interrelationships among these. The term "nomological" derives from the Greek, meaning "lawful", or in philosophy of science terms, "law-like". In Cronbach and Meehl's view of construct validity, providing evidence that a measure has construct validity requires developing a nomological network for that measure. [2]

The necessary elements of a nomological network are: the theoretical framework for what is being measured, the empirical framework for how it will be measured, and the specification of the linkages among and between these two frameworks.

Validity evidence based on nomological validity is a general form of construct validity. It is the degree to which a construct behaves as it should within a system of related constructs (the nomological network). [3]
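As an illustration, nomological validity is often examined by checking whether the correlations actually observed among measures match the pattern the theory predicts: constructs that should be related correlate substantially (convergent evidence), while constructs that should be unrelated do not (discriminant evidence). The sketch below uses hypothetical data and construct names chosen purely for illustration:

```python
import math

def pearson_r(x, y):
    # Pearson correlation between two equal-length lists of scores.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)) * math.sqrt(sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical scale scores for three measures taken on the same eight people.
conscientiousness = [4, 5, 3, 5, 2, 4, 5, 3]
job_performance   = [3, 5, 3, 4, 2, 4, 5, 2]
shoe_size         = [9, 7, 8, 10, 9, 8, 7, 10]

# Theory predicts a substantial correlation with job performance and
# a near-zero correlation with shoe size.
r_convergent = pearson_r(conscientiousness, job_performance)
r_discriminant = pearson_r(conscientiousness, shoe_size)
```

If the observed pattern departs from the predicted one, either the measure or the theory (or both) is called into question, which is the logic Cronbach and Meehl describe.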

Nomological networks are used in theory development and reflect a modernist approach. [4]

Related Research Articles

Psychometrics: Theory and technique of psychological measurement

Psychometrics is a field of study within psychology concerned with the theory and technique of measurement. Psychometrics generally refers to specialized fields within psychology and education devoted to testing, measurement, assessment, and related activities. Psychometrics is concerned with the objective measurement of latent constructs that cannot be directly observed. Examples of latent constructs include intelligence, introversion, mental disorders, and educational achievement. The levels of individuals on nonobservable latent variables are inferred through mathematical modeling based on what is observed from individuals' responses to items on tests and scales.

Validity is the extent to which a concept, conclusion or measurement is well-founded and likely corresponds accurately to the real world. The word "valid" is derived from the Latin validus, meaning strong. The validity of a measurement tool is the degree to which the tool measures what it claims to measure. Validity rests on the strength of a collection of different types of evidence.

Quantitative marketing research is the application of quantitative research techniques to the field of marketing research. It has roots in both the positivist view of the world, and the modern marketing viewpoint that marketing is an interactive process in which both the buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and Promotion.

Experimental psychology: Application of experimental method to psychological research

Experimental psychology refers to work done by those who apply experimental methods to psychological study and its underlying processes. Experimental psychologists employ human participants and animal subjects to study a great many topics, including sensation and perception, memory, cognition, learning, motivation, emotion, developmental processes, social psychology, and the neural substrates of all of these.

Cronbach's alpha, also known as rho-equivalent reliability or coefficient alpha, is a reliability coefficient and measure of internal consistency of tests and measures.
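A minimal sketch of how the coefficient is computed from raw item scores, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the data and item layout here are hypothetical:

```python
from statistics import pvariance

def cronbach_alpha(items):
    # items: a list of k lists, one per item, each holding n respondents' scores.
    k = len(items)
    total_scores = [sum(resp) for resp in zip(*items)]  # each respondent's total
    sum_item_var = sum(pvariance(col) for col in items)
    total_var = pvariance(total_scores)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical 3-item scale answered by five respondents (one row per item).
items = [
    [3, 4, 5, 2, 4],
    [3, 5, 5, 1, 4],
    [2, 4, 5, 2, 3],
]
alpha = cronbach_alpha(items)
```

Values closer to 1 indicate that the items covary strongly, which is usually read as evidence of internal consistency.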

Personality test: Method of assessing human personality constructs

A personality test is a method of assessing human personality constructs. Most personality assessment instruments are in fact introspective self-report questionnaire measures or reports from life records (L-data) such as rating scales. Attempts to construct actual performance tests of personality have been very limited, even though Raymond Cattell and his colleague Frank Warburton compiled a list of over 2000 separate objective tests that could be used in constructing objective personality tests. One exception, however, was the Objective-Analytic Test Battery, a performance test designed to quantitatively measure 10 factor-analytically discerned personality trait dimensions. A major problem with both L-data and Q-data methods is that, because of item transparency, rating scales and self-report questionnaires are highly susceptible to motivational and response distortion, ranging from lack of adequate self-insight to outright dissimulation, depending on the reason for which the assessment is undertaken.

In statistics and research, internal consistency is typically a measure based on the correlations between different items on the same test. It measures whether several items that propose to measure the same general construct produce similar scores. For example, if a respondent expressed agreement with the statements "I like to ride bicycles" and "I've enjoyed riding bicycles in the past", and disagreement with the statement "I hate bicycles", this would be indicative of good internal consistency of the test.
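The bicycle example can be made concrete: with hypothetical 1-to-5 agreement ratings, the negatively worded item is reverse-scored, after which all pairwise inter-item correlations should be strongly positive. The data below are invented for illustration:

```python
import math

def pearson_r(x, y):
    # Pearson correlation between two equal-length lists of ratings.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)) * math.sqrt(sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical agreement ratings (1 = strongly disagree, 5 = strongly agree).
like_riding   = [5, 4, 2, 5, 1, 3]   # "I like to ride bicycles"
enjoyed_past  = [4, 4, 1, 5, 2, 3]   # "I've enjoyed riding bicycles in the past"
hate_bicycles = [1, 2, 5, 1, 4, 3]   # "I hate bicycles" (negatively worded)

# Reverse-score the negatively worded item so that high always means pro-bicycle.
hate_reversed = [6 - s for s in hate_bicycles]

r_pos = pearson_r(like_riding, enjoyed_past)
r_rev = pearson_r(like_riding, hate_reversed)
```

Both correlations coming out strongly positive is what "good internal consistency" looks like at the item level.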

Operationalization: Part of the process of research design

In research design, especially in psychology, social sciences, life sciences and physics, operationalization or operationalisation is a process of defining the measurement of a phenomenon which is not directly measurable, though its existence is inferred from other phenomena. Operationalization thus defines a fuzzy concept so as to make it clearly distinguishable, measurable, and understandable by empirical observation. In a broader sense, it defines the extension of a concept—describing what is and is not an instance of that concept. For example, in medicine, the phenomenon of health might be operationalized by one or more indicators like body mass index or tobacco smoking. As another example, in visual processing the presence of a certain object in the environment could be inferred by measuring specific features of the light it reflects. In these examples, the phenomena are difficult to directly observe and measure because they are general/abstract or they are latent. Operationalization helps infer the existence, and some elements of the extension, of the phenomena of interest by means of some observable and measurable effects they have.
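The body mass index example can be sketched directly: "health" (or, more narrowly, "healthy weight") is operationalized as a number computed from observable measurements plus conventional cut-offs. The cut-offs below follow the widely used WHO adult categories; the function names are illustrative:

```python
def bmi(weight_kg, height_m):
    # Body mass index: weight divided by height squared.
    return weight_kg / height_m ** 2

def bmi_category(value):
    # Conventional WHO adult cut-offs used to operationalize "healthy weight".
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

category = bmi_category(bmi(70, 1.75))  # a 70 kg person who is 1.75 m tall
```

The abstract concept never appears in the code; only its observable indicators do, which is the point of operationalization.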

Construct validity concerns how well a set of indicators represent or reflect a concept that is not directly measurable. Construct validation is the accumulation of evidence to support the interpretation of what a measure reflects. Modern validity theory defines construct validity as the overarching concern of validity research, subsuming all other types of validity evidence such as content validity and criterion validity.

In psychometrics, predictive validity is the extent to which a score on a scale or test predicts scores on some criterion measure.
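Predictive validity is usually summarized as a validity coefficient: the correlation between test scores and a criterion measured later. The sketch below uses invented admission-test and first-year GPA figures:

```python
import math

def pearson_r(x, y):
    # Pearson correlation between two equal-length lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical data: admission-test scores, and the same students'
# first-year GPA measured a year later (the criterion).
test_scores = [52, 61, 70, 45, 66, 58, 73, 49]
later_gpa   = [2.4, 2.9, 3.5, 2.1, 3.2, 2.7, 3.6, 2.3]

validity_coefficient = pearson_r(test_scores, later_gpa)
```

A coefficient this high would rarely be seen with real admissions data; the point is only the shape of the computation.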

In psychometrics, criterion validity, or criterion-related validity, is the extent to which an operationalization of a construct, such as a test, relates to, or predicts, a theoretical representation of the construct, the criterion. Criterion validity is often divided into concurrent and predictive validity based on the timing of measurement for the "predictor" and outcome. Concurrent validity refers to a comparison between the measure in question and an outcome assessed at the same time. Standards for Educational & Psychological Tests states, "concurrent validity reflects only the status quo at a particular time." Predictive validity, on the other hand, compares the measure in question with an outcome assessed at a later time. Although concurrent and predictive validity are similar, researchers are cautioned to keep the terms and findings separate: "Concurrent validity should not be used as a substitute for predictive validity without an appropriate supporting rationale." Criterion validity is typically assessed by comparison with a gold standard test.

Constructive realism is a branch of philosophy, specifically the philosophy of science. It was developed in the late 1950s by Jane Loevinger and elaborated in the 1980s by Friedrich Wallner in Vienna.

Quantitative psychology: Field of scientific study

Quantitative psychology is a field of scientific study that focuses on the mathematical modeling, research design and methodology, and statistical analysis of psychological processes. It includes tests and other devices for measuring cognitive abilities. Quantitative psychologists develop and analyze a wide variety of research methods, including those of psychometrics, a field concerned with the theory and technique of psychological measurement.

Paul E. Meehl: American psychologist (1920–2003)

Paul Everett Meehl was an American clinical psychologist, Hathaway and Regents' Professor of Psychology at the University of Minnesota, and past president of the American Psychological Association. A Review of General Psychology survey, published in 2002, ranked Meehl as the 74th most cited psychologist of the 20th century, in a tie with Eleanor J. Gibson. Throughout his nearly 60-year career, Meehl made seminal contributions to psychology, including empirical studies and theoretical accounts of construct validity, schizophrenia etiology, psychological assessment, behavioral prediction, and philosophy of science.

Lee Joseph Cronbach was an American educational psychologist who made contributions to psychological testing and measurement.

Anthony F. Gregorc is an American who has taught educational administration. He is best known for his disputed theory of a Mind Styles Model and its associated Style Delineator. The model tries to match education to particular learning styles, as identified by Gregorc.

Construct (philosophy): Object whose existence depends upon a subject's mind

In philosophy, a construct is an object which is ideal, that is, an object of the mind or of thought, meaning that its existence may be said to depend upon a subject's mind. This contrasts with any possibly mind-independent objects, the existence of which purportedly does not depend on the existence of a conscious observing subject. Thus, the distinction between these two terms may be compared to that between phenomenon and noumenon in other philosophical contexts and to many of the typical definitions of the terms realism and idealism also. In the correspondence theory of truth, ideas, such as constructs, are to be judged and checked according to how well they correspond with their referents, often conceived as part of a mind-independent reality.

Test validity is the extent to which a test accurately measures what it is supposed to measure. In the fields of psychological testing and educational testing, "validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests". Although classical models divided the concept into various "validities", the currently dominant view is that validity is a single unitary construct.

Test construction strategies are the various ways that items in a psychological measure are created and decided upon. They are most often associated with personality tests but can also be applied to other psychological constructs such as mood or psychopathology. There are three commonly used general strategies: inductive, deductive, and empirical. Scales created today will often incorporate elements of all three methods.

A forgiveness scale is a psychological test that attempts to measure a person's willingness to forgive. The precise definition of forgiveness is debated among researchers; Hargrave suggests that forgiveness refers to releasing resentment toward an offender.

References

  1. Preckel, Franzis; Brunner, Martin (2017), "Nomological Nets", in Zeigler-Hill, Virgil; Shackelford, Todd K. (eds.), Encyclopedia of Personality and Individual Differences, Springer International Publishing, pp. 1–4, doi:10.1007/978-3-319-28099-8_1334-1, ISBN 9783319280998.
  2. Cronbach, L. J.; Meehl, P. E. (1955). "Construct validity in psychological tests". Psychological Bulletin. 52 (4): 281–302. doi:10.1037/h0040957. hdl:11299/184279. PMID 13245896. S2CID 5312179.
  3. Liu, Liping; Li, Chan; Zhu, Dan (2012). "A New Approach to Testing Nomological Validity and Its Application to a Second-Order Measurement Model of Trust". Journal of the Association for Information Systems. 13 (12): 950–975. doi:10.17705/1jais.00320.
  4. Alavi, M.; Archibald, M.; McMaster, R.; Lopez, V.; Cleary, M. (2018). "Aligning theory and methodology in mixed methods research: before design theoretical placement". International Journal of Social Research Methodology. 21 (5): 527–540.