Anthony Gregorc

Anthony F. Gregorc
Nationality: American
Alma mater: Miami University (Ohio); Kent State University (Ohio)
Known for: Mind Styles Model
Fields: Phenomenology
Institutions: Gregorc Associates Inc. [1]

Anthony F. Gregorc is an American educator who has taught educational administration. He is best known for his disputed Mind Styles Model and its associated Style Delineator. [2] The model attempts to match instruction to particular learning styles, as identified by Gregorc.


Career

Gregorc obtained a B.S. degree from Miami University and an M.S. degree and a Ph.D. degree from Kent State University. He has taught mathematics and biology and has been principal of a laboratory school for gifted youth. He was an associate professor of education administration at the University of Illinois at Urbana-Champaign and associate professor of curriculum and instruction at the University of Connecticut. [1] He is president of Gregorc Associates, Inc., in Columbia, Connecticut.

Mind Styles Model and Gregorc Style Delineator

The Gregorc Style Delineator is a self-scoring written instrument that elicits responses to a set of 40 specific words. [3] Scoring the responses gives values for a model with two axes: a "perceptual space duality" (concrete vs. abstract) and an "ordering duality" (sequential vs. random). [4] The resulting quadrants are the four "styles": Concrete Sequential (CS), Abstract Sequential (AS), Abstract Random (AR), and Concrete Random (CR).

Descriptions of the characteristics of these styles can be found in the materials available from Gregorc Associates.
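The scoring rubric itself appears only in Gregorc's own materials, so the sketch below is purely illustrative of the two-axis, four-quadrant structure: the numeric axis scores, their scale, and the classification rule are assumptions for the example, not the Delineator's actual scoring procedure.

```python
# Hypothetical illustration of the two-axis, four-quadrant structure.
# The axis names follow Gregorc's model; the scores and the rule below
# are invented for illustration and are NOT the actual GSD scoring.

def classify_style(concrete_abstract: float, sequential_random: float) -> str:
    """Map two axis scores to one of the four quadrant labels.

    concrete_abstract: negative = concrete, positive = abstract (assumed scale)
    sequential_random: negative = sequential, positive = random (assumed scale)
    """
    perception = "Abstract" if concrete_abstract > 0 else "Concrete"
    ordering = "Random" if sequential_random > 0 else "Sequential"
    return f"{perception} {ordering}"

# Example: a respondent scoring toward "concrete" and "sequential"
print(classify_style(-0.4, -0.7))  # -> "Concrete Sequential"
```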

A similarly structured (two-axis, four-style) learning style model with rather different axes and interpretation can be seen in the Kolb LSI.

Supporting evidence

The design, conduct, and results of Gregorc's original testing of the validity of his instrument and model are presented in his Development, Technical, and Administration Manual, [5] self-published and sold by Gregorc Associates. Some peer review has since appeared in conventional channels:

With the exception of Joniak and Isakson (1988) and O'Brien (1990), the only other psychometric analysis of the GSD has been limited to Gregorc's (1979) initial assessments made during the instrument's early development in which Gregorc interviewed several hundred participants. He compared the agreement of GSD scores with an untested self-assessment scale to establish the instrument's face validity for each individual (i.e., the instrument's results versus an individual's subjective agreement that their learning style profile tends to fit them). The correlations of the instrument's general results and the subjectively rated agreement attributes were reported to be between .55 and .76. This problematic method was adopted again in a subsequent comparative analysis by the author (Gregorc, 1982c) and also yielded what Gregorc considered positive results--29% strongly agreeing, 57% agreeing, 14% unsure, and none disagreeing. [6]

Review of Gregorc's study

Timothy Sewall, in a comparison of four learning style assessments (Gregorc's, the Myers–Briggs, the Kolb LSI, and an LSI by Canfield) based on a review of their published supporting studies (i.e., without new experimental work), concluded of Gregorc's design, "the most appropriate use of this instrument would be to provide an example of how not to construct [an] assessment tool." [7]

Studies by others

Reio and Wiswell (2006) report on their own independent study and on those done earlier by O'Brien (1990) and Joniak and Isakson (1988). [8]

Reliability

Reliability, including internal consistency, concerns whether evidence can show that an instrument repeatably measures something, which may be, but is "not necessarily what it is supposed to be measuring". [9]

Gregorc (1982c) reported test-retest correlation coefficients of .85 to .88 (measured twice with intervals ranging from 6 hours to 8 weeks) and alpha coefficients of .89 to .93 on all four scales. In this study, the Cronbach's alpha coefficients on all scales or channels ranged from .54 to .68 (CS = .64, CR = .68, AR = .58, AS = .54). This study's alpha coefficients are consistent with those reported by O'Brien (1990) and Joniak and Isakson (1988), which ranged from .51 to .64 and .23 to .66, respectively, on all scales. [10]

For internal consistency reliability estimates, although an alpha level of .70 can be considered "adequate," for the purposes of this study we considered a stricter alpha level of .80 as a "good" cutoff value for our psychometric examination of the GSD (Henson, 2001). [11]
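The alpha coefficients quoted above are Cronbach's alpha, the standard internal consistency statistic. The following is a minimal sketch of how it is computed from a matrix of item responses; the data here are randomly generated stand-ins, not GSD responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]                          # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Made-up responses (20 respondents, 10 items); illustrative only.
rng = np.random.default_rng(0)
fake = rng.integers(1, 5, size=(20, 10)).astype(float)
print(round(cronbach_alpha(fake), 2))
```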

Construct validity

Construct validity concerns whether evidence can show that what the instrument measures is in fact what the offered theory claims it is, i.e. whether each construct in the model "adequately represents what is intended by theoretical account of the construct being measured". [12]

The data disconfirmed both the two- and four-factor confirmatory models. In the post hoc exploratory factor analyses, many of the factor pattern/structure coefficients were ambiguously associated with two or more of the four theoretical channels as well. Overall, there was little support for the GSD's theoretical basis or design and the concomitant accurate portrayal of one's cognitive learning style. [13]

[F]ar more work is needed on the GSD if indeed two bipolar dimensions and Gregorc's mediational or channel theory are to be empirically supported and if it is to be appropriately used with samples of adults. [14]

Consistent with Joniak and Isaksen (1988) and O'Brien (1990), the GSD did not display sufficient empirical evidence to validate the instrument's scores or to confirm Gregorc's theoretical interpretation of four learning style channels or two bipolar dimensions. [15]
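The factor-analytic checks described in these passages ask whether the instrument's items group into the two dimensions or four channels the model predicts. The sketch below shows the general shape of such an exploratory analysis using scikit-learn's FactorAnalysis on simulated data; the cited studies used their own data and software, so this is an assumed, illustrative setup rather than a re-analysis of the GSD.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated stand-in for a respondents x items matrix; NOT real GSD responses.
rng = np.random.default_rng(0)
responses = rng.normal(size=(200, 40))

# Extract four factors, mirroring the four hypothesized channels.
fa = FactorAnalysis(n_components=4, random_state=0)
fa.fit(responses)

# Loadings: how strongly each item relates to each extracted factor.
# A clean four-channel structure would show each item loading mainly on
# one factor; ambiguous cross-loadings are the pattern Reio and Wiswell
# (2006) report as problematic for the GSD.
loadings = fa.components_.T               # shape: (40 items, 4 factors)
print(np.round(loadings[:5], 2))          # first five items' loadings
```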

Supporting evidence, learning style models generally

A report from the UK think tank Demos found that the evidence for a variety of learning style models is "highly variable", and that "authors are not by any means always frank about the evidence for their work, and secondary sources ... may ignore the question of evidence altogether, leaving the impression that there is no problem here." [16]


Footnotes

  1. Gregorc Associates site, archived 2007-07-07 at the Wayback Machine, accessed July 2007.
  2. Learning Styles at ThinkQuest.org, accessed July 2007.
  3. "Gregorc Style Delineator™". gregorc.com.
  4. Gregorc 1984, p. 3.
  5. Gregorc 1984
  6. Reio and Wiswell 2006, p. 492.
  7. Sewall 1986
  8. Reio and Wiswell 2006
  9. Reliability (statistics)
  10. Reio and Wiswell 2006, p. 494.
  11. Reio and Wiswell 2006, p. 495.
  12. Validity (statistics)#Construct validity
  13. Reio and Wiswell 2006, abstract.
  14. Reio and Wiswell 2006, pp. 498–499.
  15. Reio and Wiswell 2006, p. 499.
  16. Hargreaves, D., et al. (2005). About learning: Report of the Learning Working Group. Demos, p. 11.

